My Firefox suddenly became sluggish and then froze. I opened Process Explorer to see what was going on and noticed the main thread of firefox.exe was stuck in the kernel function NtAllocateVirtualMemory. At that time, the process was using only 1.5 GB of virtual memory space, and I had more than 1 GB of commit limit free and at least 1 GB of RAM free. I thought Firefox's memory space might have become too fragmented, so I killed it.
Then I got a surprising graph like the one below.
As you can see, there was RAM free during the entire period, but I seemed to have hit the commit limit anyway. The page file is set to system managed and the system drive has more than 17 GB free, so I have no idea how I could have hit the limit. Any thoughts on this?
System is Windows 10 build 10586. I have 8GB of RAM.
(It seems Firefox or something related to it claimed a hidden 3-4 GB of virtual memory space. I think it could be the display driver, but why did the system not expand the page file?)
-
Are you running a 32-bit version of Firefox? – spherical_dog, Jan 16, 2016 at 0:06
-
Yes, it is 32-bit, but I can normally go up to 1.9 GB of virtual memory use with no problem whatsoever. – billc.cn, Jan 16, 2016 at 0:09
-
Well, the virtual memory limit for a single 32-bit process is 2 GB if it is not large-address aware. 32-bit Firefox is supposed to be large-address aware, which doesn't explain the crash, but you might want to test the 64-bit version and see if it crashes. – spherical_dog, Jan 16, 2016 at 0:13
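The address-space figures in the comment above can be sanity-checked with some quick arithmetic (these are the standard Windows limits; the `LARGEADDRESSAWARE` name is the usual linker flag, not something from this thread):

```python
# Quick arithmetic behind the 32-bit address-space limits mentioned above.
GiB = 2 ** 30

# Without the LARGEADDRESSAWARE flag, a 32-bit process gets only the low
# half of its 4 GiB address space for user-mode allocations.
default_user_space = 2 ** 31
print(default_user_space // GiB)  # 2

# With LARGEADDRESSAWARE set, a 32-bit process on 64-bit Windows can use
# its full 4 GiB address space.
laa_user_space = 2 ** 32
print(laa_user_space // GiB)  # 4
```

So hitting trouble around 1.5-1.9 GB of use is consistent with a 32-bit process whose usable address space is fragmented well before the hard limit.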
1 Answer
Physical memory and commit limit are distinct resources. You can run out of one even though you have plenty of the other left. You most likely need a larger page file to raise the commit limit.
Physical memory is very much like cash in the bank. Commit limit is very much like checks that you've already written. Even if you have lots of cash in the bank, if you've written a lot of checks, you may be unable to write more checks.
Say you have a system with 3GB of free RAM and no page file. And say an application asks for 2GB of memory. The system will say "yes" and raise the commit limit by 2GB. The system still has 3GB of free RAM, because the application hasn't used any yet. But if another application requests 2GB of memory, the OS will have to refuse. It has 3GB in the bank but has written a check for 2GB, so it can't write another check for 2GB.
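The accounting in the example above can be sketched as a toy model (hypothetical code, not a real Windows API; names like `MemoryManager` are invented for illustration):

```python
# Toy model of commit-limit accounting, mirroring the answer's example:
# committing memory raises the system commit charge immediately, but
# physical RAM is only consumed when the committed pages are touched.
GB = 1024 ** 3

class MemoryManager:
    def __init__(self, ram, pagefile):
        self.commit_limit = ram + pagefile   # roughly RAM + page file
        self.commit_charge = 0               # "checks already written"
        self.free_ram = ram                  # "cash in the bank"

    def commit(self, size):
        # Fails if it would push total commitments past the limit,
        # no matter how much physical RAM is still free.
        if self.commit_charge + size > self.commit_limit:
            return False
        self.commit_charge += size
        return True

    def touch(self, size):
        # Physical RAM is consumed only when committed pages are written.
        self.free_ram -= size

mm = MemoryManager(ram=3 * GB, pagefile=0)
print(mm.commit(2 * GB))     # True: commit charge is now 2 GB
print(mm.free_ram == 3 * GB) # True: no RAM consumed yet
print(mm.commit(2 * GB))     # False: 2 + 2 > 3 GB limit, despite 3 GB RAM free
```

This is why the graph in the question can show free RAM the whole time while the commit limit is hit anyway.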
-
Okay, this I don't get (even though all the evidence points to it). In my naive computer-science understanding, the commit limit should be pageable RAM + page file. How could the system not be able to use all the RAM? Any pointer to reading I can do? Also, this does not explain why the OS failed to expand the page file, since it is set to system managed. – billc.cn, Jan 16, 2016 at 0:20
-
I get the argument in your example, but at the point when I killed Firefox, it could have requested at most 500 MB more memory before running out of process address space. However, the system was 1 GB below the commit limit. (I've realised the commit graph has a scaling issue, because the limit nearly halved after FF was killed.) – billc.cn, Jan 16, 2016 at 0:33
-
@billc.cn Taking your questions in order: the system can use all the RAM (for example, to fulfil existing obligations); it just can't reserve any more memory, so requests to reserve memory fail. The OS didn't expand the page file because the memory was reserved but not used; the page file won't be expanded until the memory is actually used. Likely, at least part of your issue was also that the 32-bit process's virtual address space was fragmented. – David Schwartz, Jan 17, 2016 at 0:22
-
@billc.cn The point that I think is not clear is that even though total RAM is part of the commit limit, RAM is not marked as "used" just because virtual memory has been committed. If I commit 1 GB, that uses 1 GB of the system commit limit, but it doesn't actually use any RAM at all until I store something in that region. And then it only uses as much RAM as is needed to store what I've written. After a while it may use even less than that, if I don't reference it much and the OS decides to move some of it to the page file to make the RAM available to some other, higher-activity process. – Jamie Hanrahan, Apr 7, 2018 at 18:22
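The "committed but not yet using RAM" behaviour described in this comment can be observed directly on Linux, where anonymous `mmap` pages are likewise demand-zero: mapping a large region is near-instant and consumes almost no physical memory until the pages are written. (This is a hedged cross-platform illustration, not Windows code; Windows charges commit at `VirtualAlloc` time but similarly allocates physical pages only on first touch.)

```python
import mmap
import resource

MiB = 1024 * 1024
PAGE = 4096

def rss_kib():
    # Peak resident set size; ru_maxrss is KiB on Linux (bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = rss_kib()
buf = mmap.mmap(-1, 512 * MiB)   # "commit" 512 MiB of demand-zero pages
after_map = rss_kib()

# Touch half of it, one byte per page, so physical pages actually get allocated.
for off in range(0, 256 * MiB, PAGE):
    buf[off] = 1
after_touch = rss_kib()

print(after_map - before < 50 * 1024)     # mapping alone used almost no RAM
print(after_touch - before > 200 * 1024)  # touching pages used ~256 MiB
buf.close()
```

The same distinction is what the graph in the question reflects: commit charge can climb toward the limit while free RAM barely moves.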