
The Out-of-Memory Syndrome, or: Why Do I Still Need a Pagefile?

Windows’ memory management—specifically its use of RAM and the pagefile—has been a subject of concern and confusion since NT 3.1 first shipped. To be sure, there is some reason for concern. We worry about RAM because we know if there isn’t enough, the system will slow down, and will page more to disk. And that’s why we worry about the page file.

(There is also reason for confusion. Memory management in any modern operating system is a complex subject. It has not been helped by Microsoft’s ever-changing choice of nomenclature in displays like Task Manager.)

Today, RAM is just unbelievably cheap by previous standards. And Task Manager’s displays have gotten a lot better. That “memory” graph really does show RAM usage now (in Vista and 7 they made it even more clear: “Physical Memory Usage”), and people are commonly seeing their systems with apparently plenty of what Windows calls “available” RAM. (More on that in a later article.) So users and admins, always in pursuit of the next performance boost, are wondering (not for the first time) if they can delete that pesky old page file. After all, keeping everything in RAM just has to be faster than paging to disk, right? So getting rid of the page file should speed things up! Right?

You don’t get any points for guessing that I’m going to say “No, that’s not right.”

You see, eliminating the page file won’t eliminate paging to disk. It likely won’t even reduce the amount of paging to disk. That is because the page file is not the only file involved in virtual memory! Not by far.

Types of virtual memory

There are three categories of “things” (code and data) in virtual memory. Windows tries to keep as much of all of them in RAM as it can.

Nonpageable virtual memory

The operating system defines a number of uses of virtual memory that are nonpageable. As noted above, this is not stuff that Windows “tries to keep in RAM”—Windows has no choice; all of it must be in RAM at all times. These have names like “nonpaged pool,” “PFN database,” “OS and driver code that runs at IRQL 2 or above,” and other kernel mode data and code that has to be accessed without incurring page faults. It is also possible for suitably privileged applications to create some nonpageable memory, in the form of AWE allocations. (We’ll have another blog post explaining AWE.) On most systems, there is not much nonpageable memory.

(“Not much” is relative. The nonpageable memory alone on most Windows systems today is larger than the total RAM size in the Windows 2000 era!)

You may be wondering why it’s called “virtual memory” if it can’t ever be paged out. The answer is that virtual memory isn’t solely about paging between disk and RAM. “Virtual memory” includes a number of other mechanisms, all of which do apply here. The most important of these is probably address translation: The physical—RAM—addresses of things in nonpageable virtual memory are not the same as their virtual addresses. Other aspects of “virtual memory” like page-level access protection, per-process address spaces vs. the system-wide kernel mode space, etc., all do apply here. So this stuff is still part of “virtual memory,” and it lives in “virtual address space,” even though it’s always kept in RAM.

Pageable virtual memory

The other two categories are pageable, meaning that if there isn’t enough RAM for everything to stay in RAM all at once, parts of the memory in these categories (generally, the parts that were referenced longest ago) can be kept or left out on disk. When it’s accessed, the OS will automatically bring it into RAM, possibly pushing something else out to disk to make room. That’s the essence of paging. It’s called “paging,” by the way, because it’s done in terms of memory “pages,” which are normally just 4K bytes… although most paging I/O operations move many pages at once.
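
(As an aside, the page size is easy to confirm programmatically. Here is a minimal C sketch, not from the original article, using the documented Win32 GetSystemInfo call:)

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);   /* fills in, among other things, the page size */
        printf("Page size: %lu bytes\n", si.dwPageSize);   /* typically 4096 */
        return 0;
    }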

Collectively, the places where virtual memory contents are kept when they’re not in RAM are called “backing store.” The second and third categories of virtual memory are distinguished from each other by two things: how the virtual address space is requested by the program, and where the backing store is.

Committed memory

One of these categories is called “committed” memory in Windows. Or “private bytes,” or “committed bytes,” or “private commit,” depending on where you look. (On the Windows XP Task Manager’s Performance tab it was called “PF usage,” short for “page file usage,” possibly the most misleading nomenclature in any Windows display of all time.) On the Windows 8 and Windows 10 Task Manager’s “Details” tab it’s called “Commit size.”

Whatever it’s called, this is virtual memory that a) is private to each process, and b) for which the pagefile is the backing store. This is the pagefile’s function: it’s where the system keeps the part of committed memory that can’t all be kept in RAM.

Applications can create this sort of memory by calling VirtualAlloc, or malloc(), or new(), or HeapAlloc, or any of a number of similar APIs. It’s also the sort of virtual memory that’s used for each thread’s user mode stack.
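
To make that concrete, here is a minimal C sketch (an illustration, not from the article) of creating committed, pagefile-backed memory directly with VirtualAlloc; malloc, new, and HeapAlloc ultimately carve their allocations out of the same kind of region:

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Reserve and commit 1 MB of private memory in one call. This
           charges 1 MB against the system commit limit; the pagefile is
           this region's backing store. */
        SIZE_T size = 1 << 20;
        char *p = (char *)VirtualAlloc(NULL, size,
                                       MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (p == NULL) {
            printf("VirtualAlloc failed: %lu\n", GetLastError());
            return 1;
        }
        memset(p, 0xAB, size);             /* touch the pages so they're resident */
        VirtualFree(p, 0, MEM_RELEASE);    /* releases the commit charge too */
        return 0;
    }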

By the way, the sum of all committed memory in all processes, together with operating-system defined data that is also backed by the pagefile (the largest such allocation is the paged pool), is called the “commit charge.” (Except in PerfMon, where it’s called “Committed bytes” under the “Memory” object.) On the Windows XP Task Manager display, that “PF usage” graph was showing the commit charge, not the pagefile usage.

A good way to think of the commit charge is that if everything that was in RAM that’s backed by the pagefile had to be written to the pagefile, that’s how much pagefile space it would need.

So you could think of it as the worst-case pagefile usage. But that almost never happens; large portions of the committed memory are usually in RAM, so the commit charge is almost never the actual amount of pagefile usage at any given moment.

Mapped memory

The other category of pageable virtual memory is called “mapped” memory. When a process (an application, or anything else that runs as a process) creates a region of this type, it specifies to the OS a file that becomes the region’s backing store. In fact, one of the ways a program creates this stuff is an API called MapViewOfFile. The name is apt: the file contents (or a subset) are mapped, byte for byte, into a range of the process’s virtual address space.

Another way to create mapped memory is to simply run a program. When you run an executable file the file is not “read,” beginning to end, into RAM. Rather it is simply mapped into the process’s virtual address space. The same is done for DLLs. (If you’re a programmer and have ever called LoadLibrary, this does not “load” the DLL in the usual sense of that word; again, the DLL is simply mapped.) The file then becomes the backing store—in effect, the page file—for the area of address space to which it is mapped. If all of the contents of all of the mapped files on the system can’t be kept in RAM at the same time, the remainder will be in the respective mapped files.
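
Here is a hedged C sketch of the data-file case (“example.dat” is a hypothetical name, and error handling is minimal). The key point: MapViewOfFile reads nothing up front, and the mapped file itself, not the pagefile, is the backing store.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE file = CreateFileA("example.dat", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (mapping == NULL) { CloseHandle(file); return 1; }

        /* Map the whole file into our address space. Pages are faulted in
           from the file on first access; if they're later dropped from RAM,
           they can be read back in from the same file. */
        const char *view = (const char *)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (view != NULL) {
            printf("First byte: %d\n", view[0]);   /* this access pages in from the file */
            UnmapViewOfFile(view);
        }
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }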

This “memory mapping” of files is done for data file access too, typically for larger files. And it’s done automatically by the Windows file cache, which is typically used for smaller files. Suffice it to say that there’s a lot of file mapping going on.

With a few exceptions (like modified pages of copy-on-write memory sections), the page file is not used for mapped files; it is the backing store only for private committed virtual memory. When executing code tries to access part of a mapped file that’s currently paged out, the memory manager simply pages in the code or data from the mapped file. If it is ever pushed out of memory, it can be written back to the mapped file it came from. If it hasn’t been written to, which is usually the case for code, it isn’t written back to the file. Either way, if it’s ever needed again, it can be read back in from the same file.

A typical Windows system might have hundreds of such mapped files active at any given time, all of them being the backing stores for the areas of virtual address space they’re mapped to. You can get a look at them with the SysInternals Process Explorer tool by selecting a process in the upper pane, then switching the lower pane view to show DLLs.
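
The same information is available programmatically. Here is a small sketch (assuming the documented psapi functions EnumProcessModules and GetModuleFileNameEx) that lists the modules mapped into its own process:

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>
    /* link with psapi.lib (or -lpsapi) */

    int main(void)
    {
        /* Each module listed here is the backing store for the address
           range into which it is mapped. */
        HMODULE mods[512];
        DWORD needed = 0;
        if (EnumProcessModules(GetCurrentProcess(), mods, sizeof(mods), &needed)) {
            DWORD count = needed / sizeof(HMODULE);
            for (DWORD i = 0; i < count && i < 512; i++) {
                char name[MAX_PATH];
                if (GetModuleFileNameExA(GetCurrentProcess(), mods[i], name, MAX_PATH))
                    printf("%p  %s\n", (void *)mods[i], name);
            }
        }
        return 0;
    }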

So…

Now we can see why eliminating the page file does not eliminate paging to and from disk. It only eliminates paging to and from the pagefile. In other words, it only eliminates paging to and from disk for private committed memory. All those mapped files? All the virtual memory they’re mapped into? The system is still paging from and to them…if it needs to. (If you have plenty of RAM, it won’t need to.)

The following diagram shows, in greatly oversimplified and not-necessarily-to-scale fashion, the relationship between virtual address space, RAM, and the various backing stores. All of nonpageable virtual space is, of course, in RAM. Some portion of the private committed address space is in RAM (“resident”); the remainder is in the pagefile. Some portion of the mapped address space is also in RAM; the remainder being in all the files to which that address space is mapped. The three mapped files—one .dat, one .dll, one .exe—are, of course, representative of the hundreds of mapped files in a typical Windows system.

A matter of balance

So that’s why removing the pagefile doesn’t eliminate paging. (Nor does it turn off or otherwise get rid of virtual memory.) But removing the pagefile can actually make things worse. Reason: you are forcing the system to keep all private committed address space in RAM. And, sorry, but that’s a stupid way to use RAM.

One of the justifications, the reason for existence, of virtual memory is the “90-10” rule (or the 80-20 rule, or whatever): programs (and your system as a whole) spend most of their time accessing only a small part of the code and data they define. A lot of processes start up, initialize themselves, and then basically sit idle for quite a while until something interesting happens. Virtual memory allows the RAM they’re sitting on to be reclaimed for other purposes until they wake up and need it back (provided the system is short on RAM; if not, there’s no point).

But running without a pagefile means the system can’t do this for committed memory. If you don’t have a page file, then all private committed memory in every process, no matter how long ago accessed, no matter how long the process has been idle, has to stay in RAM—because there is no other place to keep the contents.

That leaves less room for code and data from mapped files. And that means that the mapped memory will be paged more than it would otherwise be. More-recently-accessed contents from mapped files may have to be paged out of RAM, in order to have enough room to keep all of the private committed stuff in. Compare this diagram with the one previous:

Now that all of the private committed v.a.s. has to stay resident, no matter how long ago it was accessed, there’s less room in RAM for mapped file contents. Granted, there’s no pagefile I/O, but there’s correspondingly more I/O to the mapped files. Since the old stale part of committed memory is not being accessed, keeping it in RAM doesn’t help anything. There’s also less room for a “cushion” of available RAM. This is a net loss.

You might say “But I have plenty of RAM now. I even have a lot of free RAM. However much of that long-ago-referenced private virtual memory there is, it must not be hurting me. So why can’t I run without a page file?”

“Low on virtual memory”; “Out of virtual memory”

Well, maybe you can. But there’s a second reason to have a pagefile:

Not having a pagefile can cause the “Windows is out of virtual memory” error, even if your system seems to have plenty of free RAM.

That error pop-up happens when a process tries to allocate more committed memory than the system can support. The amount the system can support is called the “commit limit.” It’s the sum of the size of your RAM (minus a bit to allow for the nonpageable stuff) plus the current size of your page file.

All processes’ private commit allocations together, plus some of the same stuff from the operating system (things like the paged pool), are called the “commit charge.” Here’s where you can quickly see the commit charge and commit limit on Windows 8 and 10:

 

Note: In Performance Monitor, these counters are called Memory\Committed bytes and Memory\Commit Limit. Each process’s contribution to the commit charge is in Process\(process)\Private Bytes. The latter is the same counter that Task Manager’s Processes tab (Windows 7) or Details tab (Windows 8 through 10) calls Commit Size.
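
If you’d rather read these numbers programmatically, the documented GetPerformanceInfo call (psapi) reports both, in units of pages; a minimal sketch:

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>
    /* link with psapi.lib */

    int main(void)
    {
        PERFORMANCE_INFORMATION pi;
        if (GetPerformanceInfo(&pi, sizeof(pi))) {
            /* CommitTotal and CommitLimit are counts of pages. */
            printf("Commit charge: %zu MB\n", (pi.CommitTotal * pi.PageSize) >> 20);
            printf("Commit limit:  %zu MB\n", (pi.CommitLimit * pi.PageSize) >> 20);
        }
        return 0;
    }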

When any process tries to allocate private virtual address space, Windows checks the size of the requested allocation plus the current commit charge against the commit limit. If the sum is within the commit limit, the allocation succeeds; if it exceeds the commit limit, the allocation cannot be immediately granted. But if the pagefile can be expanded (in other words, if you have not set its initial and maximum sizes to the same value), and the allocation request can be accommodated by expanding the pagefile, the pagefile is expanded and the allocation succeeds. (This is where you would see the “system is running low on virtual memory” pop-up. And if you checked it before and after, you’d see that the commit limit is increased.)

If the pagefile cannot be expanded enough to satisfy the request (either because it’s already at its upper size limit, or there is not enough free space on the disk), or if you have no pagefile at all, then the allocation attempt fails. And that’s when you see the “system is out of virtual memory” error. (Changed to simply “out of memory” in Windows 10. Not an improvement, Microsoft!)

The reason for this has to do with the term “commit.” The OS will not allow a process to allocate virtual address space, even though that address space may not all be used for a while (or ever), unless it has a place to keep the contents. Once the allocation has been granted, the OS has committed to make that much storage available.

For private committed address space, if it can’t be in RAM, then it has to be in the pagefile. So the “commit limit” is the size of RAM (minus the bit of RAM that’s occupied by nonpageable code and data) plus the current size of the pagefile. Whereas virtual address space that’s mapped to files automatically comes with a place to be stored, and so is not part of “commit charge” and does not have to be checked against the “commit limit.”

Remember, these “out of memory” errors have nothing to do with how much free RAM you have. Let’s say you have 8 GB RAM and no pagefile, so your commit limit is 8 GB. And suppose your current commit charge is 3 GB. Now a process requests 6 GB of virtual address space. (A lot, but not impossible on a 64-bit system.) 3 GB + 6 GB = 9 GB, over the commit limit, so the request fails and you see the “out of virtual memory” error.

But when you look at the system, everything will look ok! Your commit charge (3 GB) will be well under the limit (8 GB)… because the allocation failed, so it didn’t use up anything. And you can’t tell from the error message how big the attempted allocation was.

Note that the amount of free (or “available”) RAM didn’t enter into the calculation at all.
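
You can demonstrate this to yourself with a few lines of C (a sketch, assuming a 64-bit build; the exact error code may vary, but ERROR_COMMITMENT_LIMIT, 1455, is typical): a commit request that would exceed the commit limit fails immediately, no matter how much RAM is free.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Try to commit 1 TB of private memory. On most machines this
           exceeds the commit limit, so the call fails up front; free RAM
           never enters into the decision. */
        SIZE_T huge = (SIZE_T)1 << 40;
        void *p = VirtualAlloc(NULL, huge, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (p == NULL)
            printf("Commit failed, GetLastError() = %lu\n", GetLastError());
        else
            VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }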

So for the vast majority of Windows systems, the advice is still the same: don’t remove your pagefile.

If you have one and don’t need it, there is no cost. Having a pagefile will not “encourage” more paging than otherwise; paging is purely a result of how much virtual address space is being referenced vs. how much RAM there is.

If you do need one and don’t have it, applications will fail to allocate the virtual memory they need, and the result (depending on how carefully the apps were written) may well be unexpected process failures and consequent data loss.

Your choice.

What about the rest? Those not in the vast majority? This would apply to systems that are always running a known, unchanging workload, with no changes to the application mix and no significant changes to the data being handled. An embedded system would be a good example. In such systems, if you’re running without a pagefile and you haven’t seen “out of virtual memory” in a long time, you’re unlikely to see it tomorrow. But there’s still no benefit to removing the pagefile.

What questions do you have about Windows memory management? Ask us in the comments! We’ll of course be discussing these and many related issues in our public Windows Internals seminars, coming up in May and July. 

 

13 responses on “The Out-of-Memory Syndrome, or: Why Do I Still Need a Pagefile?”

  1. Mike Blaszczak, July 29, 2014 at 7:01 pm

    Stack space is initially reserved then committed as necessary. See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686774%28v=vs.85%29.aspx

    1. Jamie Hanrahan (Post author), July 29, 2014 at 9:19 pm

      Thank you for the comment! That is absolutely correct, and when we talk about VirtualAlloc and committed vs. reserved v.a.s. in our internals seminars (shameless plug!) we do use the user mode stack as an example.

      But for the purposes of this article I chose not to address that, or several other details for that matter; one is always trying to keep articles as short as possible, and I decided that those details would not have made the argument for the conclusion any stronger.

  2. Mike Blaszczak, July 30, 2014 at 5:44 am

    Thing is, stack space is germane to this discussion. With a page file, stack space can be reserved but not committed. Without a page file, all stack space has to be committed at the start of the thread, whether it is used or not. In that state, creating a thread is a touch more likely to fail, since all the stack memory must be committed immediately. Lots of threads would mean lots of memory being committed but never used.

    1. Jamie Hanrahan (Post author), July 30, 2014 at 8:10 am

      Sorry, but no… reserving v.a.s. (for the stack or otherwise) does not require a pagefile, nor does it affect commit charge. A reserved region simply needs a Virtual Address Descriptor that says “this range of Virtual Page Numbers is reserved.” No pagefile space is needed. This is easily demonstrated with testlimit.
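
      A minimal C sketch of the distinction (illustrative, not testlimit itself): the MEM_RESERVE call below consumes no pagefile space and no commit charge; only the MEM_COMMIT call does.

          #include <windows.h>

          int main(void)
          {
              /* Reserving 256 MB creates only a Virtual Address Descriptor;
                 no commit charge, no pagefile space. */
              SIZE_T size = (SIZE_T)256 << 20;
              char *base = (char *)VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
              if (base == NULL) return 1;

              /* Committing a page is what charges against the commit limit. */
              if (VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE))
                  base[0] = 1;   /* safe: this page is committed */

              /* Touching a merely reserved page, e.g. base[size - 1],
                 would raise an access violation. */
              VirtualFree(base, 0, MEM_RELEASE);
              return 0;
          }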

  3. Bryan, December 18, 2014 at 1:27 pm

    OK, so in this age of SSD (which is costly, so people size as low as they feel they can get by with), how much freespace, relative to installed RAM, would you recommend people leave available for pagefile and hiberfil?

    For context, I’m getting questions like “If I have 16GB of RAM and I relocate my user profile directory and all data storage to a second drive, can I get away with a 32GB SSD for Windows?”

    1. Jamie Hanrahan (Post author), December 21, 2014 at 2:34 pm

      For the hibernate file, you don’t really have a choice: It needs to be the size of RAM. That’s what the OS will allocate for it if you enable hibernation. If you don’t want that much space taken up by the hibernate file, your only option is to not enable hibernation.

      For the pagefile, my recommendation has long been that your pagefile’s default or initial size should be large enough that the performance counter Paging file | %usage (peak) is kept below 25%. My rationale for this is that the memory manager tries to aggregate pagefile writes into large clusters, the clusters have to be virtually contiguous within the pagefile, and internal space in the pagefile is managed like a heap; having plenty of free space in the page file is the only thing we can do to increase the likelihood of large contiguous runs of blocks being available within the pagefile.

      The above is not a frequently expressed opinion; I should probably expand it to a blog post.

      Re “relative to installed RAM”: sizing the pagefile to 1.5x or 1x the size of RAM is simply what Windows does at installation time. It was never intended to be more than an estimate that would almost always result in a pagefile that’s large enough, with no concern that it might be much larger than it needed to be. Note that the only cost of a pagefile of initial size “much larger than it needs to be” is in the disk (or SSD) space occupied. It was not that long ago that hard drives cost (in $ per GB) about what SSDs do now, so I don’t see that the cost of SSD is a factor.

      I’m not sure how free space on the disk enters into it, except where allowing pagefile expansion is concerned. The above suggestion is for the default or initial size. I see no reason to limit the maximum size at all.
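
      (For reference, the counter mentioned above can also be sampled programmatically; a hedged sketch using the documented PDH API and English counter names follows. “% Usage Peak” is the peak counterpart of the instantaneous “% Usage” counter.)

          #include <windows.h>
          #include <pdh.h>
          #include <stdio.h>
          /* link with pdh.lib */

          int main(void)
          {
              PDH_HQUERY query;
              PDH_HCOUNTER counter;
              PDH_FMT_COUNTERVALUE value;

              if (PdhOpenQueryA(NULL, 0, &query) != ERROR_SUCCESS) return 1;
              PdhAddEnglishCounterA(query, "\\Paging File(_Total)\\% Usage", 0, &counter);
              PdhCollectQueryData(query);
              if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value)
                      == ERROR_SUCCESS)
                  printf("Pagefile usage: %.1f%%\n", value.doubleValue);
              PdhCloseQuery(query);
              return 0;
          }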

      1. Bryan, December 21, 2014 at 3:01 pm

        I do agree that SSD becomes more affordable every day. Still, I often see people trying to use the least amount of SSD possible. (For context, I help a lot of people in an IRC channel about Windows.) So I’m trying to develop a rule of thumb for them.

        Given what you said, it seems like the answer would be something like this: 1) A default installation of Windows 8.1 will typically use around 14GB of space, but with updates and so on could reasonably grow to 25GB. 2) the hiberfil will be the size of RAM and 3) you should leave at least 1.5x RAM disk space available for pagefile.

        So. If we have 16GB RAM, then allow 1) 25GB for Windows 2) 16GB for hiberfil and 3) 24GB for pagefile. Which means one should set aside at least a 65GB partition for Windows’ C: drive – and this is before thinking about how much space will be needed for applications and data.

        Or to put it another way. If (at default pagefile settings) freespace + hiberfil + pagefile is less than 2.5x amount of RAM in the system, “out of virtual memory” errors are just one memory-hungry application away. The likelihood of this error goes down, the more freespace one leaves on the disk.

        1. Jamie Hanrahan (Post author), December 21, 2014 at 6:15 pm

          To clarify, I was not defending or promoting the “1.5x RAM” idea for pagefile initial size, just explaining it. Windows’ use of it at installation time (it’s actually 1x in later versions) is based on the notion that installed RAM will be approximately scaled to workload: Few people will buy 16 GB RAM for a machine to be used for light Office and web browsing use, and few will install just 2 GB RAM where the workload will include 3d modeling or video editing.

          But my experience is that if you suggest “some factor times size of RAM” as a rule to be followed, you will get pushback: “But with more RAM you should need less pagefile space, not more!” And if the workload is the same, that’s completely true.

          I would also phrase things differently re. leaving disk space “available” for the pagefile. One should set the initial pagefile size to a value that will be large enough. This allocates disk space to the pagefile, it does not leave it “available.” As stated before, my metric for “large enough” is “large enough that no more than 25% pagefile space is used under maximum actual workload”.

          The only way free space on the disk should be involved or considered w.r.t. the pagefile size is in enabling pagefile expansion, i.e. setting the maximum size larger than the initial. Now, if the initial size is large enough, the pagefile will never have to be expanded, so enabling expansion would seem to do nothing. But it provides a zero-cost safety net, which will save you in case your initial size turns out to be not large enough. And of course pagefile expansion is ultimately limited by the free space on the disk.

          1. Bryan, December 22, 2014 at 9:48 am

            Thanks for your thoughts on the matter, Jamie!

            Just to clarify the intent of the question a little, our general advice about pagefile settings is to leave them alone. System-managed all the way. Our hope is that this will remove the urge to limit the pagefile or remove it completely. Your idea of setting an initial size but no maximum is interesting; we’ll consider changing our advice! We do heavily stress that aside from (potential) disk space usage, there’s no downside to allowing the pagefile to grow to whatever size it wants. As I’m sure you’re aware, this is somewhat counterintuitive to quite a few people!

            So, given that and the basic question “how much disk space should I allow for the OS?” I wanted to be able to give a relatively safe rule of thumb for sizing the original OS partition. I’ll still say something like “sure, you can probably get away with less, but the smaller you make it, the more likely you’ll later find yourself in a pickle”.

  4. Todd Martin, February 3, 2015 at 1:15 am

    I know more about the craters on the moon than I know about the memory issues on my computer.

    So, hopefully someone out there can help me understand this and maybe suggest a fix.

    I have Windows 7 on my Dell laptop. I have a 750GB hard drive. A month or so ago I checked the used space on my hard drive and I had used just shy of 50% of the space.

    Now, I am down to less than 50MB! I have no idea where all the memory went. Lately, every time I boot the laptop I get the message that the system has created a paging file, and while I’m on the laptop the error message pops up saying low disk space (it actually just popped up).

    I’ve off-loaded maybe 5GB of files, only to have the low disk space message pop up an hour later.

    I have not loaded anything new on the laptop (not that I know of) prior to the memory loss.

    I have run multiple virus scans, but they have come up empty.

    It’s difficult to even be on email at this point.

    I don’t know enough to have programmed it or altered its setup in a way that could have led to the vanishing memory.

    The only thing that I have done – as suggested on other blog sites – is to delete old restore points. That didn’t do anything.

    What ate over 300GB of memory? How do I stop it, and how do I get that memory back?

    Any guidance would be greatly appreciated.

    Thank you.

    1. Jamie Hanrahan (Post author), February 5, 2015 at 5:10 am

      Hi. First, let us say that we sympathize – this sort of thing can be very frustrating.

      This article doesn’t really address hard drive space, but rather virtual address space and physical memory (i.e. RAM). It sounds as if something in your system is furiously writing to your hard drive – other than the creation of the pagefile. The space on the hard drive is not usually thought of as “memory.”

      To track this sort of thing down, my first stop would be Task Manager. Right-click on an empty part of your taskbar and click “Start Task Manager”. Select the “Processes” tab. Then go to the View menu, and click “Select Columns”. Check the box for “I/O Writes”. OK. Oh, and click the “Show processes from all users” button at the bottom. Finally, click on the “I/O Writes” column head so that this column is sorted with the largest value at the top. Unfortunately this shows the total number of writes, not the rate. But it’s a start. If you see one of these ticking up rapidly, that’s a process to look at.

      A better tool might be the “Resource Monitor”, which you can get to from Task Manager’s “Performance” tab. Click the “Resource Monitor” button near the bottom. In Resource Monitor, select the “Disk” tab. In this display you already have columns for read and write rates, in bytes/sec. Click the “Write (B/sec)” column head so that the largest values in this column are at the top. Now, the process at the top might be “System”; if so, that is due to how the Windows file cache works. But the thing to look for is the non-“System” processes that are doing a lot of writes, even when you think your system should be quiet.

      Still in Resource Monitor: If you expand the “Disk Activity” portion of the display you’ll see the I/O rates broken down by file.

      There are some utilities out there, some free, some not, to help you find where all the space is going. The first one that came up in my Google search for “disk space analyzer” is “TreeSize Free”, which gives an Explorer-like display of the tree of directories, but with each annotated with the total size at and below that point. Another is “WinDirStat”, which gives a much more graphical view. This seems to be something a lot of people want help with; the search results show two articles at LifeHacker in the last few years on such software. Try a few of the free ones and see what they tell you.

      Finally, I would not so much look for malware like viruses (malware these days tries pretty hard to avoid notice, and filling up your disk space is something most people notice), but just buggy software. (Of course, malware can be buggy…) I recently traced a similar problem – not filling up the hard drive, but writing to it incessantly, thereby using up the drive’s I/O bandwidth – to the support software for a fancy mouse. Naturally I pulled the plug on that mouse and uninstalled its software. For your case… if the problem has been going on for a month, what have you added to the system in the last month? From Control Panel, you can go to “Uninstall a program”, and the table you’ll see there has clickable column heads for sorting. Sort by installation date and see what’s new.

      Hope this helps! – Jamie Hanrahan

  5. Bryan, July 7, 2015 at 3:10 pm

    Jamie, today I was watching this Channel 9/MVA video about Windows Performance: https://channel9.msdn.com/Series/Windows-Performance/02

    The section on physical and virtual memory, starting around 17:00, strikes me as something you could improve greatly.

    1. Jamie Hanrahan (Post author), August 21, 2015 at 9:32 pm

      Indeed, that section blurred a lot of terms. However, I feel it necessary to point out that to really explain Windows memory management takes a significant amount of time. There’s no way anyone could do much better in the amount of time that presentation had.



Understanding Virtual Memory

by Perris Calderon

May, 2004

 

First off, let us get a couple of things out of the way

•  XP is a virtual memory operating system

•  There is nothing you can do to prevent virtual memory in the NT kernel

 

No matter your configuration, with any given amount of RAM, you cannot reduce the amount of paging by adjusting any user interface in these virtual memory operating systems. You can redirect operating system paging, and you can circumvent virtual memory strategy, but you cannot reduce the amount of paging in the NT family of kernels.

 

To elaborate;

We have to realize that paging is how everything gets brought into memory in the first place! It's quite obvious that anything in memory either came from your disk, or will become part of your disk when your work is done. To quote the Microsoft Knowledge Base:

 

"Windows NT REQUIRES 'backing storage' for EVERYTHING it keeps in RAM. If Windows NT requires more space in RAM, it must be able to swap out code and data to either the paging file or the original executable file."

 

Here's what actually happens:

Once information is brought into memory (it must be paged in), the operating system will choose for that process a memory reclamation strategy. In one form of this memory reclamation (or paging, to be clear), the kernel can mark data to be released or unloaded without a hard write. The OS will retrieve said information directly from the .exe or the .dll that the information came from if it's referenced again. This is accomplished by simply "unloading" portions of the .dll or .exe, and reloading that portion when needed again. Nice!

 

Note: For the most part, this paging does not take place in the page file; this form of paging takes place within the direct location of the .exe or the .dll.

 

The "page file" is another form of paging, and this is what most people are talking about when they refer to the system paging. The page file is there to provide space for whatever portion of virtual memory has been modified since it was initially allocated. In a conversation I had with Mark Russinovich this is stated quite eloquently:

 

"When a process allocates a piece of private virtual memory (memory not backed by an image or data file on disk, which is considered sharable memory), the system charges the allocation against the commit limit. The commit limit is the sum of most of physical memory and all paging files. In the background the system will write these pages out to the paging file if a paging file exists and there is space in the paging file. This is an optimization only."

 

See this? Modified information cannot use the original file or .exe as its backing store, since it has been modified*; this is obvious once stated, isn't it?

 

Different types of things are paged to different files. You can't page "private writable committed" memory to .exe or .dll files, and you don't page code to the paging file.*

 

With this understanding we realize:

HAVING A PAGE FILE THAT DOESN'T MATCH THE PHYSICAL MEMORY YOU HAVE IN USE WILL AT TIMES INHIBIT THE PAGING OF PRIVATE WRITABLE VIRTUAL ADDRESS SPACE AND FORCE THE UNNECESSARY UNLOADING OF POSSIBLY RECENTLY ACCESSED .DLLS AND .EXES!

 

You see now, in a situation such as this, when memory needs to be reclaimed, you'll be paging and unloading other things in order to take up the necessary slack you've lost by having a page file smaller than the memory in use (the private writable pages can no longer be backed if you've taken away their page file area).

 

The effect? Stacks, heaps, program global storage, etc. will all have to stay in physical memory, NO MATTER HOW LONG AGO ANY OF IT WAS REFERENCED!!! This is very important for any given workload and ANY amount of RAM, since the OS would like to mark memory available when it hasn't been called on for a long time. You have impeded this strategy if you have a page file smaller than the amount of RAM in use.

 

The hits? More paging (or backing) of executable code, cache data maps, and the like, even though they were referenced far more recently than, for argument's sake, the bottommost pages of a thread's stack. See? Those bottommost pages are what we want paged, not .exes or .dlls that were recently referenced.

 

You thwart this good strategy when there is less page file than there is memory in use.

 

**All memory seen under the NT family of OSes is virtual memory (processes access memory through their virtual memory address space); there is no way to address RAM directly!!

 

And so we see, if memory is in use, it has either come from the hard drive or it will go to the hard drive... THERE MUST BE HARD DRIVE AREA FOR EVERYTHING YOU HAVE IN MEMORY... (self-evident, isn't it).

 

Now, that's out of the way, let's go further:

When the operating system needs to reclaim memory (because all memory is currently in use, and you are launching new apps or loading more info into existing work), the OS obviously has to get the necessary RAM from somewhere. Something in memory will (must) be unloaded to make room for your new work. No one knows what will be unloaded until the time comes, as XP will unload whatever is least likely to come into use again.

 

Memory reclamation in XP goes even further than this to make the process as seamless as possible, using more algorithms than most can appreciate. For instance, there is a "first in, first out" (FIFO) policy for page faults, there is a "least recently used" (LRU) policy, and a combination of those with others to determine just what will not be noticed when it's released. Remarkable! There is also a "standby list." When information hasn't been used in a while but nothing is claiming the memory as yet, it becomes available, both written on disk (possibly the page file) and still in memory. Oh, did I forget to say? ALL AT THE SAME TIME ('til the memory is claimed)! Sweet!!! If this information is called for before the memory is claimed by a new process, it will be brought in without needing anything from the hard drive! This is what's known as a "soft fault": memory available and loaded, also at the ready for new use, at the same time!
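
A hedged C sketch of the soft-fault idea (illustrative only; it assumes the documented psapi call EmptyWorkingSet): after the working set is trimmed, the pages sit on the modified/standby lists, and re-touching them is satisfied from RAM without disk reads, unless something else claimed that memory in between.

    #include <windows.h>
    #include <psapi.h>
    #include <string.h>
    /* link with psapi.lib */

    int main(void)
    {
        SIZE_T size = (SIZE_T)64 << 20;   /* 64 MB */
        char *buf = (char *)VirtualAlloc(NULL, size,
                                         MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (buf == NULL) return 1;
        memset(buf, 1, size);                  /* fault every page in */

        /* Trim our working set: the pages leave the process but stay in
           RAM on the modified/standby lists until the memory is claimed. */
        EmptyWorkingSet(GetCurrentProcess());

        /* If nothing claimed the RAM, these re-accesses are soft faults:
           resolved from memory, no disk I/O. */
        volatile char sink = 0;
        for (SIZE_T i = 0; i < size; i += 4096)
            sink ^= buf[i];

        VirtualFree(buf, 0, MEM_RELEASE);
        return 0;
    }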

 

Why so much trouble with today's amount of ram?

You have to realize, most programs are written with the 90/10 rule in mind: they spend 90% of their time bringing only 10% of their code or data into use for any given user. The rest of a program can (should) be kept out on disk. This will obviously make more physical memory available for other, more immediate and important needs. You don't keep memory waiting around if it's not likely to be used; you try to have your memory invested in good purpose and function. The unused features of these programs will simply be paged in (usually from the .exe) if they are ever called by the user... HA!!!... no page file used for this paging (unloading and reloading of .exes and .dlls).

 

To sum everything up:

If you are not short of hard drive space, reducing the size of the page file below the default is counterproductive, and will in fact impede the memory strategies of XP if you ever do increase your workload and put your memory under pressure.

Here's why:

"Mapped" addresses are ranges for which the backing store is an .exe, a .dll, or some data file explicitly mapped by the programmer (for instance, the swap file in Photoshop).

"Committed" addresses are backed by the paging file.

None, some, or all of the "mapped" and "committed" virtual space might actually still be resident in the process address space. Simply speaking, this means that it's still in RAM and referenceable without raising a page fault.

The remainder (ignoring the in-memory page caches, or soft page faults) obviously has to be on disk somewhere. If it's "mapped," the place on the disk is the .exe, .dll, or whatever the mapped file is. If it's "committed," the place on the disk is the paging file.
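
The mapped/committed distinction is visible from user mode; here is a small sketch (using the documented VirtualQuery call) that walks its own address space and reports whether each committed region is image-backed, mapped to a data file, or private (pagefile-backed):

    #include <windows.h>
    #include <stdio.h>

    static const char *type_name(DWORD t)
    {
        switch (t) {
        case MEM_IMAGE:   return "image   (backed by .exe/.dll)";
        case MEM_MAPPED:  return "mapped  (backed by a data file)";
        case MEM_PRIVATE: return "private (backed by the pagefile)";
        default:          return "other";
        }
    }

    int main(void)
    {
        MEMORY_BASIC_INFORMATION mbi;
        char *addr = NULL;
        while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            if (mbi.State == MEM_COMMIT)
                printf("%p  %9zu KB  %s\n", mbi.BaseAddress,
                       mbi.RegionSize >> 10, type_name(mbi.Type));
            addr = (char *)mbi.BaseAddress + mbi.RegionSize;
        }
        return 0;
    }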

 

Why Does The Page File Need To Be Bigger Than The Information Written To It?

 

**Memory allocation in NT is a two-step process--virtual memory addresses are reserved first, and committed second...The reservation process is simply a way NT tells the Memory Manager to reserve a block of virtual memory pages to satisfy other memory requests by the process...There are many cases in which an application will want to reserve a large block of its address space for a particular purpose (keeping data in a contiguous block makes the data easy to manage) but might not want to use all of the space.

 

This is simplest to explain using the following analogy:

If you were to look at any 100%-occupied apartment building in Manhattan, you would see that at any given time throughout the day, fewer than 25% of the residents are in the building at once!

 

Does this mean the apartment building can be 75% smaller?

Of course not. You could do it, but man, would that make things tough. For best efficiency, every resident in this building needs their own address. Even those that never show up at all need their own address, don't they? We can't assume that they never will show up, and we need to keep space available for everybody.

 

512 residents will need 512 beds...plus they will need room to toss and turn.

For reasons similar to this analogy, you couldn't have different pieces of memory sharing the same virtual address, could you?
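
Returning to the two-step allocation in the quote above, here is a hedged C sketch of the pattern it describes (names like GROW_BUF are made up for this illustration): reserve a large contiguous range for free, and commit pages only as the data actually grows.

    #include <windows.h>

    typedef struct {
        char  *base;
        SIZE_T reserved;
        SIZE_T committed;
    } GROW_BUF;   /* hypothetical helper type for this illustration */

    int growbuf_init(GROW_BUF *b, SIZE_T max)
    {
        /* Step 1: reserve. No commit charge, no pagefile space used. */
        b->base = (char *)VirtualAlloc(NULL, max, MEM_RESERVE, PAGE_NOACCESS);
        b->reserved = max;
        b->committed = 0;
        return b->base != NULL;
    }

    int growbuf_ensure(GROW_BUF *b, SIZE_T needed)
    {
        if (needed <= b->committed) return 1;
        if (needed > b->reserved)   return 0;
        /* Step 2: commit just the additional pages; the addresses stay
           contiguous, which is the point of reserving up front. */
        if (!VirtualAlloc(b->base + b->committed, needed - b->committed,
                          MEM_COMMIT, PAGE_READWRITE))
            return 0;
        b->committed = (needed + 4095) & ~(SIZE_T)4095;
        return 1;
    }

    int main(void)
    {
        GROW_BUF b;
        if (growbuf_init(&b, (SIZE_T)1 << 30) && growbuf_ensure(&b, 10000))
            b.base[9999] = 42;   /* within the committed portion */
        return 0;
    }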

 

Now, for users that do not put their memory under pressure: if you are certain you won't be adding additional workload, you will not likely take a hit if you decide to lower the default setting of the page file. For this, if you need the hard drive area, you are welcome to save some space on the drive by decreasing the initial minimum. Mark tells me the rule of thumb, if hard drive space is an issue for you, is as follows: "You can see the commit peak in Task Manager or Process Explorer. To be safe, size your paging files to double that amount (expansion enabled)." He goes on to say that if a user increases physical memory without increasing their workload, a smaller page file is an option to save hard drive area. Once again, however, we repeat: it's necessary to have at least as much page file as the amount of memory you have in use.
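
For example (an illustration of that rule of thumb, not Mark's numbers): if Task Manager shows a commit peak of 6 GB, sizing the paging file at 12 GB with expansion enabled satisfies the "double the peak" guideline.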

 

Let's move on

 

!!!!!!!!!!!!!!!!!!!! IMPORTANT!!!!!!!!!!!!!!!!!!!!

 

ONCE THE PAGE FILE IS CONTIGUOUS, IT CANNOT BECOME FRAGMENTED ON A HEALTHY DRIVE.

THIS INCLUDES PAGE FILES THAT ARE "DYNAMIC"

 

Any "expert" that has told you the page file becomes fragmented due to "expansion" has an incomplete understanding of what the page file is, what the page file does, and how the page file functions. To make this as simple as possible, here's what actually happens, and exactly how the "fragmented page file" myth got started:

 

First, we need to point out that the page file is a different type of file than most of the files on your computer. The page file is a "container" file. Most files are like bladders that fill with water: they are small, taking no space on the hard drive at all until information is written; the boundaries of the file form and change as information is written; the boundaries grow, shrink, and expand around and in between the surrounding area and the surrounding files like a balloon or bladder would.

 

The page file is different. The page file is not like a bladder. It's like a can or container. Even if nothing is written to the page file, its physical size and location remain constant and fixed. Other files will form around the page file, even when nothing at all is written to it (once the page file is contiguous).

 

For instance, suppose you have a contiguous page file that has an initial minimum of 256MB. Even if there is absolutely nothing written to that page file, the file will still be 256MB. The 256MB will not move in location on the hard drive and nothing but page file activity will enter the page file area. With no information written to the page file, it is like an empty can, which remains the same size whether it's full or empty.

 

Compare this again to a common file on your hard drive. These files behave more like a bladder than a container. If there is nothing written to a common file, other information will form in proximity. This will affect the final location of these common files; not so with the page file. Once you make the page file contiguous, its extent will remain identical on a healthy drive even if expansion is invoked.

 

Here's how the "fragmented page file" myth due to dynamic page file got started:

Suppose, for argument's sake, your computing session requires more virtual memory than your settings accommodate. The operating system will try to keep you working by expanding the page file. This is good. If this doesn't happen you will freeze, slow down, stall, or crash. Now, it's true, the added portion of the page file in this situation is not going to be near the original extent. You now have a fragmented page file, and this is how that "fragmented page file due to expansion" myth got started. HOWEVER, IT IS INCORRECT... and simple to see why: the added portion of the page file is eliminated on reboot. The original page file absolutely has to return to the original condition and the original location it was in when you reboot. If the page file was contiguous before expansion, it is absolutely contiguous after expansion once you reboot.

 

(blue is data, green is page file)

[Diagram: what a normal page file looks like]

[Diagram: what an expanded page file looks like]

[Diagram: what the page file looks like after rebooting]

 

What Causes the Expansion of a Page File?

Your operating system will seek more virtual memory when the "commit charge" approaches the "commit limit".

 

What does that mean? In the simplest terms this is when your work is asking for more virtual memory (commit charge) than what the OS is prepared to deliver (commit limit).

 

In technical terms, the "commit charge" is the total of the private (non-shared) virtual address space of all of your processes. This excludes, however, all the address space that holds code, mapped files, et cetera.

 

For best performance, you need to make your page file large enough that the operating system never needs to expand it, so that the commit charge (virtual memory requested) is never larger than the commit limit (virtual memory available). In other words, your virtual memory must be more abundant than what the OS will request (soooo obvious, isn't it). This will be known as your initial minimum.

 

Then, for good measure, you should leave expansion available to about three times this initial minimum. Thus the OS will be able to keep you working in case your needs grow, i.e. you start using some of those very sophisticated programs that get written more and more every day, or you create more user accounts (user accounts invoke the page file for Fast User Switching), or whatever; there is no penalty for leaving expansion enabled.

 

NOW YOU HAVE THE BEST OF BOTH WORLDS: a page file that is static, because you have made the initial minimum so large the OS will never need to expand it, and expansion enabled, just in case you are wrong in your evaluation of what kind of power user you are or become.

 

USUALLY THE DEFAULT SETTINGS OF XP ACCOMPLISH THIS GOAL. Most users do not need to be concerned or proactive in setting their virtual memory. In other words, leave it alone.

 

HOWEVER, SOME USERS NEED A HIGHER INITIAL MINIMUM THAN THE DEFAULT. These are the users who have experienced an episode where the OS expanded the page file, or claimed it was short of virtual memory.

 

USERS WHO ARE NOT SHORT OF HARD DRIVE SPACE SHOULD NEVER LOWER THE DEFAULT SETTINGS OF THE PAGE FILE.

Fact!

 

Different types of things are paged to different files. You can't page "private writable committed" memory to .exe or .dll files, and you don't page code to the paging file.

Jamie Hanrahan of Kernel Mode Systems, the web's "root directory" for Windows NT and Windows 2000 (aka jeh from 2cpu.com), has corrected my statement on this matter with the following caveat:

 

There's one not-unheard-of occasion where code IS paged to the paging file: If you're debugging, you're likely setting breakpoints in code. That's done by overwriting an opcode with an INT 3. Voilà! Code is modified. Code is normally mapped in sections with the "copy on write" attribute, which means that it's nominally read-only and everyone using it shares just one copy in RAM, and if it's dropped from RAM it's paged back in from the .exe or .dll - BUT - if someone writes to it, they instantly get their own process-private copy of the modified page, and that page is thenceforth backed by the paging file.

Copy-on-write actually applies to data regions defined in EXEs and .DLLs also. If I'm writing a program and I define some global locations, those are normally copy-on-write. If multiple instances of the program are running, they share those pages until they write to them - from then on they're process-private.
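To make the breakpoint example concrete, here is a sketch of my own of what a debugger effectively does. The instant the INT 3 byte (0xCC) is written, copy-on-write gives this process a private copy of that code page, and that copy is from then on backed by the paging file instead of the .exe. The patch is undone immediately so nothing actually breaks:

    #include <windows.h>
    #include <cstdio>

    // A function whose first opcode we will overwrite (and restore).
    static int victim() { return 42; }

    int main()
    {
        unsigned char* code = reinterpret_cast<unsigned char*>(&victim);

        DWORD old;
        if (!VirtualProtect(code, 1, PAGE_EXECUTE_READWRITE, &old))
            return 1;

        unsigned char saved = code[0];
        code[0] = 0xCC;   // INT 3: the page just became process-private (COW)
        std::printf("patched %p: 0x%02X -> 0xCC\n",
                    static_cast<void*>(code), saved);

        code[0] = saved;  // restore the original opcode
        VirtualProtect(code, 1, old, &old);
        FlushInstructionCache(GetCurrentProcess(), code, 1);
        return 0;
    }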

 

 

Credits and Contributions:

 

Perris Calderon

Concept and Creation

 

Eric Vaughan

Editing

 

*Jamie Hanrahan

Kernel Mode Systems (...)

 

**Inside Memory Management, Part 1, Part 2

by Mark Russinovich

Understanding Virtual Memory

by Perris Calderon

May 2004

 

First, let's get a few things out of the way:

• XP is a virtual memory operating system

• You cannot prevent virtual memory in the NT kernel

 

No matter how you configure things, for a given amount of RAM you cannot reduce the amount of paging by adjusting any user interface in these virtual memory operating systems. You can redirect where the OS pages, and you can circumvent virtual memory policy, but you cannot reduce the amount of paging in the NT family of kernels.

 

In detail:

We have to realize that paging is how everything is brought into memory in the first place! Obviously, anything in memory either came from disk or will become part of the disk once your work is done. Quoting the Microsoft Knowledge Base:

 

Windows NT requires "backing storage" for everything it keeps in RAM. If Windows NT requires more space in RAM, it must be able to swap out code and data to either the paging file or the original executable file.

 

Here's what actually happens:

Once information has been brought into memory (it has to be paged in), the OS chooses a memory-reclamation strategy for that process. One form of this reclamation (or paging, to be clear) is that the kernel can mark data to be released or unloaded without a hard write. If that information is referenced again, the OS retrieves it directly from the .exe or .dll it came from. This is done by simply "unloading" sections of the .dll or .exe and reloading those sections when they are needed again. Good!

 

Note: in most situations this paging does not involve the page file at all; this form of paging happens against the .exe or .dll in place.

 

The "page file" is another form of paging, and it's what most people are talking about when they refer to the system paging. The page file provides space for any part of virtual memory that has been modified since it was initially allocated. In one of my conversations with Mark Russinovich, this was put eloquently:

 

When a process allocates a piece of private virtual memory (memory not backed by an image or data file on disk, which is considered sharable memory), the system charges the allocation against the commit limit. The commit limit is the sum of most of physical memory and all the page files. If a page file exists and there is space in it, the system will, in the background, write these pages out to the page file. This is just an optimization.

 

See that? Modified information cannot be backed to its original file or .exe, because it has been modified*; obvious once you're told, isn't it?
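You can watch the charge Mark describes being taken. In this sketch of mine, committing 256MB of private memory immediately reduces the remaining commit (GlobalMemoryStatusEx's ullAvailPageFile field, which despite its name is the commit limit minus the commit charge), even though nothing has yet been touched in RAM or written to the page file:

    #include <windows.h>
    #include <cstdio>

    static unsigned long long availCommitMB()
    {
        MEMORYSTATUSEX ms = { sizeof(ms) };
        GlobalMemoryStatusEx(&ms);
        return ms.ullAvailPageFile >> 20;   // remaining commit, in MB
    }

    int main()
    {
        std::printf("remaining commit before: %llu MB\n", availCommitMB());

        // Commit private memory: charged against the commit limit at once.
        void* p = VirtualAlloc(nullptr, 256u << 20,
                               MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);

        std::printf("remaining commit after : %llu MB\n", availCommitMB());

        if (p) VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }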

 

Different types of things are paged to different files. You can't page "private writable committed" memory to .exe or .dll files, and you don't page code to the paging file.*

 

With that understood, we realize:

If the page file doesn't match the memory you are using, you will at times inhibit the paging of private writable virtual address space and force the unloading of .DLLs and .EXEs that may have been accessed recently!

 

You see now that in this situation, when memory needs to be reclaimed, you will page and unload other things to make up for the necessary margin lost because the page file is smaller than the memory in use. (If you take away the page-file area for private writable pages, those pages can no longer be backed.)

 

The effect? Stacks, heaps, program global storage and the like must all stay in physical memory, no matter how long ago they were last referenced! This matters for any given workload and any amount of RAM, because the OS wants to mark memory as available when it hasn't been called on for a long time. If your page file is smaller than the amount of memory in use, you are obstructing this policy.

 

What takes the hit? More paging of executable code, cached data maps, and the like, even though they were referenced much more recently than, for argument's sake, the bottom-most pages of a thread's stack. See? Those bottom-most pages are the ones we want paged out, not the recently referenced .exe or .dll.

 

When the page file is smaller than the amount of memory in use, you obstruct this good strategy.

 

**All memory seen under the NT family of operating systems is virtual memory (processes access memory through virtual address space); there is no way to address RAM directly!

 

So we see that if memory is in use, it either came from the hard drive or will go to the hard drive... there has to be hard-drive area to accommodate everything in memory... (goes without saying, doesn't it?).

 

Now that that's out of the way, let's take it a step further:

When the OS needs to take memory (because all of it is currently in use and you are launching a new application, or loading more information into existing work), it obviously has to get the necessary memory from somewhere. Something in memory will (must) be unloaded to accommodate your new work. Until that moment comes, nobody knows what will be unloaded, because XP will unload whatever is least likely to be used again.

 

Memory reclamation in XP goes even further than this, using more algorithms than most people care to fathom to make the process as seamless as possible. For instance, there is a "first in, first out" (FIFO) policy for page faults, a "least recently used" (LRU) policy, and combinations of these with others to determine what won't be noticed when it's released. Brilliant! There is also a "standby list": when information hasn't been used for a while but nothing needs the memory yet, it becomes available while being both written on disk (possibly in the page file) and still in memory. Oh, did I forget to say? All at the same time (until the memory is taken)! Whew!!! If this information is called on before a new process takes that memory, it is brought in without needing anything from the hard drive! This is known as a "soft fault": the memory is available and already loaded, yet also ready for new use!
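Here's a rough sketch of mine that makes the soft fault visible, under the assumption that the machine isn't otherwise under memory pressure (so the trimmed pages stay on the modified/standby lists): trim our own working set, touch the pages again, and note that the fault counter jumps even though no disk read was needed. Link with psapi.lib:

    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    static DWORD faultCount()
    {
        PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
        GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
        return pmc.PageFaultCount;   // counts soft and hard faults alike
    }

    int main()
    {
        const SIZE_T size = 64u << 20;   // 64 MB
        volatile char* buf = static_cast<char*>(
            VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
        if (!buf) return 1;

        for (SIZE_T i = 0; i < size; i += 4096)
            buf[i] = 1;                          // first touch: demand-zero faults

        EmptyWorkingSet(GetCurrentProcess());    // trim: pages move to the
                                                 // modified/standby lists

        DWORD before = faultCount();
        for (SIZE_T i = 0; i < size; i += 4096)
            (void)buf[i];                        // re-touch: resolved as soft faults
        std::printf("faults on re-touch: %lu\n", faultCount() - before);
        return 0;
    }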

 

Why all this trouble, with today's amounts of memory?

You have to realize that most programs are written to the 90/10 rule: they spend 90% of their time using only 10% of their code or data for any given user. The rest of the program can (and should) be left on disk. Obviously this leaves more physical memory available for other, more immediate and important needs. You don't keep memory waiting around if it's unlikely to be used; you put your memory to good purpose and function. If a user ever does call on those unused features, they are simply paged in (usually from the .exe)... HA!!!... no page file is used for this kind of paging (the unloading and reloading of .exe and .dll sections).

 

To sum everything up:

If you are not short of hard-drive space, reducing the page file below its default size is counterproductive, and if you do increase your workload and put memory under pressure, it will actually hinder XP's memory strategy.

Here's why:

"Mapped" addresses are ranges whose backing store is an .exe, a .dll, or some data file the programmer explicitly mapped (such as the swap file in Photoshop).

"Committed" addresses are backed by the paging file.

None, some, or all of the "mapped" and "committed" virtual space may still be resident in the process address space. Simply put, that means it is still in RAM and can be referenced without incurring a page fault.

The rest (ignoring pages cached in memory, i.e. soft page faults) obviously has to be somewhere on disk. If it is "mapped", the place on disk is the .exe, .dll, or whatever the mapped file is. If it is "committed", the place on disk is the paging file.
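This split is visible through VirtualQuery, which reports whether a region is image-backed, mapped-file-backed, or private. A small sketch of my own (note that the section view below reports MEM_MAPPED even though, with no on-disk file named, it happens to be backed by the page file):

    #include <windows.h>
    #include <cstdio>

    static const char* backing(const void* p)
    {
        MEMORY_BASIC_INFORMATION mbi;
        if (!VirtualQuery(p, &mbi, sizeof(mbi))) return "unknown";
        switch (mbi.Type) {
            case MEM_IMAGE:   return "image - pages back to the .exe/.dll";
            case MEM_MAPPED:  return "mapped - pages back to the mapped section/file";
            case MEM_PRIVATE: return "private/committed - pages back to the paging file";
            default:          return "free";
        }
    }

    int main()
    {
        int stackVar = 0;                       // thread stack: private, committed
        void* heap = VirtualAlloc(nullptr, 4096,
                                  MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        HANDLE sec = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                        PAGE_READWRITE, 0, 4096, nullptr);
        void* view = MapViewOfFile(sec, FILE_MAP_WRITE, 0, 0, 0);
        if (!heap || !view) return 1;

        std::printf("code  : %s\n", backing(reinterpret_cast<const void*>(&backing)));
        std::printf("stack : %s\n", backing(&stackVar));
        std::printf("heap  : %s\n", backing(heap));
        std::printf("view  : %s\n", backing(view));
        return 0;
    }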

 

Why does the page file need to be larger than the information written to it?

 

**Memory allocation in NT is a two-step process: virtual memory addresses are first reserved, then committed... The reservation process is simply a way NT tells the memory manager to set aside a block of virtual memory pages to satisfy a process's memory requests... There are many cases in which an application will want to reserve a large block of its address space for a particular purpose (keeping data in a contiguous block makes the data easy to manage) but might not want to use all of the space.
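In code, the two steps look like this; a minimal sketch of the idea only. The reservation costs address space but almost nothing else; only the committed piece is charged against the commit limit:

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        const SIZE_T RESERVE = 1u << 30;   // 1 GB of address space
        const SIZE_T CHUNK   = 1u << 20;   // commit 1 MB of it

        // Step 1: reserve. No backing store is promised yet.
        // (In a 32-bit process this large a reservation may fail; shrink it.)
        char* base = static_cast<char*>(
            VirtualAlloc(nullptr, RESERVE, MEM_RESERVE, PAGE_NOACCESS));
        if (!base) return 1;

        // Step 2: commit only what is actually used.
        char* piece = static_cast<char*>(
            VirtualAlloc(base, CHUNK, MEM_COMMIT, PAGE_READWRITE));
        if (!piece) return 1;

        piece[0] = 'x';   // usable now
        std::printf("reserved 1 GB at %p, committed the first MB\n",
                    static_cast<void*>(base));

        VirtualFree(base, 0, MEM_RELEASE);   // releases the whole range
        return 0;
    }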

 

This is most simply explained with the following analogy:

Look at any fully occupied apartment building in Manhattan, and you'll find that at any given time of day, fewer than 25% of the residents are in the building at once!

 

Does that mean the apartment building could be made 75% smaller?

Of course not. You could do it, but it would make things difficult. For best efficiency, every resident of this building needs his own address. Even the ones who never show up need their own addresses, don't they? We can't assume they will never show up; we have to keep space available for everyone.

 

512 residents will need 512 beds... and room to turn over besides.

For reasons analogous to this, you can't have different kinds of memory sharing the same virtual addresses, can you?

 

Now, for users who don't put pressure on memory: if you are sure you will not be adding workload, you are unlikely to suffer if you decide to lower the page file's default settings. To that end, if you need the hard-drive area, you are welcome to save some drive space by reducing the initial minimum. The rule of thumb Mark gave me for monitoring this is as follows: "You can see the commit peak in Task Manager or Process Explorer. To be safe, size your paging file to twice that amount (with expansion enabled)." He went on to say that a user who adds physical memory without adding workload can opt for a smaller page file to save hard-drive space. But again, we repeat what we said above.
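That rule of thumb is easy to check from code as well; this sketch of mine just applies Mark's arithmetic to the commit peak the system has recorded since boot (link with psapi.lib):

    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    int main()
    {
        PERFORMANCE_INFORMATION pi = { sizeof(pi) };
        if (!GetPerformanceInfo(&pi, sizeof(pi)))
            return 1;

        SIZE_T peakMB = pi.CommitPeak * pi.PageSize >> 20;
        std::printf("commit peak since boot   : %zu MB\n", peakMB);
        std::printf("suggested initial minimum: %zu MB (twice the peak, expansion enabled)\n",
                    2 * peakMB);
        return 0;
    }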

 

Let's continue.

 

!!!!!!!!!!!!!!!!!!!! IMPORTANT !!!!!!!!!!!!!!!!!!!!

 

Once the page file is contiguous, it will not become fragmented on a healthy drive.

This includes a "dynamic" page file.

 

Any "expert" who tells you that the page file becomes fragmented due to "expansion" does not completely understand what the page file is, what it does, or how it works. To put this as simply as possible, here is what actually happens, and how the "fragmented page file" myth got started:

 

First, we need to point out that the page file is a different kind of file from most of the files on your computer. The page file is a "container" file. Most files are like bladders filling with water: they are small and take up no room on the hard drive until information is written to them, and their boundaries form and change as information is written, growing, shrinking, and expanding into the surrounding area like a balloon or a bladder.

 

The page file is different. The page file is not like a bladder; it is like a can, a container. Even if nothing is written to the page file, its physical size and location stay the same. And even with nothing written to it (once the page file is contiguous), other files will form around the page file.

 


posted on 2023-09-28 15:42  不及格的程序员-八神