Background
Recently a project was deployed to AWS, using ECS + Docker + Java:
| Launch type | CPU Units | Memory |
|---|---|---|
| FARGATE | 1024 | 4G |
Once it was running, the application did not behave as expected: whenever the workload got busy, large numbers of tasks were killed and replaced with new ones, every time with OutOfMemory as the stated reason. The service could not even sustain two concurrent threads.
| Details | |
|---|---|
| Status reason | OutOfMemoryError: Container killed due to memory usage |
| Exit Code | 137 |
Timeline
I picked a few typical cases and reproduced the problem on AWS without difficulty, then ported the data to a local test. Watching the JVM in jvisualvm, however, the heap size stayed perfectly flat and no OutOfMemory ever occurred. Since the application mainly performs computation with heavy IO, I spent several days investigating how to reduce IO reads and writes, with nothing to show for it, until yesterday I happened to notice a piece of code whose output was not what I expected:
```java
private static final int MB_UNIT = 1024 * 1024;

public void scheduleTask() {
    try {
        long freeMemory = Runtime.getRuntime().freeMemory();
        LOGGER.info("start batchCalculation usedMemory={}MB freeMemory={}MB",
                (Runtime.getRuntime().totalMemory() - freeMemory) / MB_UNIT,
                freeMemory / MB_UNIT);
        ...
        freeMemory = Runtime.getRuntime().freeMemory();
        LOGGER.info("finish batchCalculation usedMemory={}MB maxMemory={}MB freeMemory={}MB",
                (Runtime.getRuntime().totalMemory() - freeMemory) / MB_UNIT,
                Runtime.getRuntime().maxMemory() / MB_UNIT,
                freeMemory / MB_UNIT);
    } finally {
        MDC.clear();
    }
}
```
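The three `Runtime` methods used above report different things, and the distinction matters for the rest of this investigation: `totalMemory()` is the heap currently committed by the JVM, `freeMemory()` is the unused part of that, and `maxMemory()` is the ceiling the heap is allowed to grow to (set by `-Xmx` or by ergonomics). A minimal sketch of the relationship (the class and method names here are my own, not from the project):

```java
// Sketch of the Runtime memory accessors used in the logging code above.
public class MemorySnapshot {
    private static final long MB = 1024 * 1024;

    // used = committed heap minus the unused part of it.
    public static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / MB;
    }

    // max = the ceiling the heap may grow to (-Xmx or ergonomics),
    // which can be far larger than what is currently committed.
    public static long maxMb() {
        return Runtime.getRuntime().maxMemory() / MB;
    }

    public static void main(String[] args) {
        System.out.println("used=" + usedMb() + "MB max=" + maxMb() + "MB");
    }
}
```

This is why the log line below can report a large `maxMemory` while `usedMemory` stays modest: `maxMemory()` reflects configuration, not actual consumption.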
The result on AWS:
```
2018-05-30 09:45:00,000 INFO class=c.m.schedule.ScheduledTasks thread=scheduled-task-pool-1 request_id="24da9c0c-e3e5-451f-8b5d-0898c68252cc" service_name=api event_description="start batchCalculation usedMemory=905MB freeMemory=1982MB"
2018-05-30 09:45:10,016 INFO class=c.m.schedule.ScheduledTasks thread=scheduled-task-pool-1 request_id="24da9c0c-e3e5-451f-8b5d-0898c68252cc" service_name=api event_description="finish batchCalculation usedMemory=905MB maxMemory=6651MB freeMemory=1982MB"
```
Here maxMemory=6651MB clearly exceeds 4G. The application runs with the following JVM flags:

```
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1
```
If those flags took effect, the JVM heap size would be the container's maximum available memory (i.e. ~4G). Perhaps, then, the JDK version was the problem. To test that guess, I pushed a new image to ECS and ran:

```dockerfile
ENTRYPOINT exec java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -version
```
which gave:

```
VM settings:
    Max. Heap Size (Estimated): 6.50G
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM

openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-1~deb9u1-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
```
This shows the JDK version (>1.8.0_131) is not the problem. With the JDK ruled out but the heap size still wrong, the issue had to lie in the ECS or JVM configuration: the JVM grows its heap toward what it perceives as the machine's available memory (~6.5G), while ECS caps the task at 4G. When the workload gets busy, the application tries to claim more than 4G, trips the ECS memory limit, and is killed. So I tried capping the heap with Xmx/Xms, changed the startup command, and pushed and deployed a new image:

```dockerfile
ENTRYPOINT exec java -Xmx3072m -Xms3072m -XshowSettings:vm -jar app.jar
```
After startup the VM settings were:

```
VM settings:
    Min. Heap Size: 3.00G
    Max. Heap Size: 3.00G
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM
```
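Why 3072m rather than the full 4096m? The ECS limit applies to the whole process, and metaspace, thread stacks, the code cache and direct buffers all live outside the Java heap, so the heap needs headroom below the container limit. A hypothetical helper sketching that sizing (the cgroup v1 path and the 25% reserve are my assumptions, not from this post):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class HeapSizer {
    // cgroup v1 location of the container memory limit; an assumption,
    // the path varies with the container runtime and cgroup version.
    static final Path CGROUP_V1_LIMIT =
            Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes");

    // Reserve a quarter of the container limit for non-heap memory
    // (metaspace, thread stacks, code cache, direct buffers).
    public static long suggestedHeapBytes(long containerLimitBytes) {
        return containerLimitBytes / 4 * 3;
    }

    public static void main(String[] args) throws Exception {
        long limit = Files.exists(CGROUP_V1_LIMIT)
                ? Long.parseLong(new String(Files.readAllBytes(CGROUP_V1_LIMIT)).trim())
                : Runtime.getRuntime().maxMemory(); // fallback outside a container
        System.out.println("-Xmx" + suggestedHeapBytes(limit) / (1024 * 1024) + "m");
    }
}
```

For a 4G task limit this rule of thumb yields 3072m, matching the value used above.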
With 4 threads running concurrently for 20 minutes, everything stayed normal and there was no OutOfMemory. By comparison, it was clear that `-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1` had not taken effect. So I added `-XX:+PrintGCDetails -XX:+PrintGCDateStamps` to inspect the GC details:

```dockerfile
ENTRYPOINT exec java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XshowSettings:vm -jar app.jar
```
After redeploying with this configuration and waiting for a task to be killed with OutOfMemory again, the GC logs showed the heap size never exceeding 4G, so the guess apparently did not hold after all:

```
2018-06-01T02:55:17.775+0000: [GC (Allocation Failure) [PSYoungGen: 1507554K->87211K(1993216K)] 2639108K->1237620K(3393024K), 0.2491182 secs] [Times: user=0.29 sys=0.01, real=0.24 secs]
2018-06-01T02:55:36.307+0000: [GC (Allocation Failure) [PSYoungGen: 1564843K->182611K(2011136K)] 2715252K->1384684K(3410944K), 0.6166316 secs] [Times: user=0.61 sys=0.02, real=0.61 secs]
```
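The exit code 137 reported earlier (128 + 9, i.e. SIGKILL) fits this picture: the container was killed from the outside rather than the JVM throwing a java.lang.OutOfMemoryError. One possible explanation, not verified in this investigation, is native memory: the ECS limit counts everything the process maps, while GC logs only cover the heap. A small sketch of off-heap allocation that GC logs never show:

```java
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapUsedBefore = rt.totalMemory() - rt.freeMemory();

        // 64MB of native memory: counted against the cgroup/ECS task limit,
        // but allocated outside the Java heap that -Xmx and GC logs govern.
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        long heapUsedAfter = rt.totalMemory() - rt.freeMemory();
        System.out.println("direct capacity = "
                + direct.capacity() / (1024 * 1024) + "MB");
        System.out.println("heap delta      = "
                + (heapUsedAfter - heapUsedBefore) / (1024 * 1024) + "MB");
    }
}
```

On top of direct buffers, thread stacks and metaspace add up the same way, which is why a heap comfortably under 4G can still push the whole process over the task limit.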
Conclusion
The true cause of the OutOfMemory still hasn't been verified, but capping memory with Xmx/Xms clearly works around the problem. To be followed up later (see the follow-up).
References
- http://gdocker.com/3676/dockerdc-osjvm.html
- https://github.com/fabric8io-images/java/issues/6
- https://www.questarter.com/q/java-lang-outofmemoryerror-java-heap-space-when-there-is-allot-of-memory-on-the-docker-machine-27_49536404.html
- https://issues.jboss.org/browse/CLOUD-1537?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel&showAll=true&_sscc=t
- https://dzone.com/articles/why-my-java-application-is-oomkilled
- https://blog.csanchez.org/2017/05/31/running-a-jvm-in-a-container-without-getting-killed/