Android Memory Management (4): Official Guide "Managing Your App's Memory" (with the 16 strategies for memory efficiency)
Managing Your App's Memory
1. In this document
- How Android Manages Memory
- How Your App Should Manage Memory (the 16 strategies for memory efficiency)
- Use services sparingly
- Release memory when your user interface becomes hidden
- Release memory as memory becomes tight
- Check how much memory you should use
- Avoid wasting memory with bitmaps
- Use optimized data containers *
- Be aware of memory overhead
- Be careful with code abstractions
- Use nano protobufs for serialized data *
- Avoid dependency injection frameworks
- Be careful about using external libraries
- Optimize overall performance * (tools and articles for optimizing code, CPU, memory, UI, and more)
- Use ProGuard to strip out any unneeded code
- Use zipalign on your final APK *
- Analyze your RAM usage *
- Use multiple processes
See Also
Random-access memory (RAM) is a valuable resource in any software development environment, but it's even more valuable on a mobile operating system where physical memory is often constrained. Although Android's Dalvik virtual machine performs routine garbage collection, this doesn't allow you to ignore when and where your app allocates and releases memory.
In order for the garbage collector to reclaim memory from your app, you need to avoid introducing memory leaks (usually caused by holding onto object references in global members) and release any Reference objects at the appropriate time (as defined by lifecycle callbacks discussed further below). For most apps, the Dalvik garbage collector takes care of the rest: the system reclaims your memory allocations when the corresponding objects leave the scope of your app's active threads.
This document explains how Android manages app processes and memory allocation, and how you can proactively reduce memory usage while developing for Android. For more information about general practices to clean up your resources when programming in Java, refer to other books or online documentation about managing resource references. If you’re looking for information about how to analyze your app’s memory once you’ve already built it, read Investigating Your RAM Usage.
2. How Android Manages Memory
Android does not offer swap space for memory, but it does use paging and memory-mapping (mmapping) to manage memory. This means that any memory you modify—whether by allocating new objects or touching mmapped pages—remains resident in RAM and cannot be paged out. So the only way to completely release memory from your app is to release object references you may be holding, making the memory available to the garbage collector. That is with one exception: any files mmapped in without modification, such as code, can be paged out of RAM if the system wants to use that memory elsewhere.
Android does not provide swap space; it manages memory with paging and memory-mapping (mmapping).
2.1 Sharing Memory
In order to fit everything it needs in RAM, Android tries to share RAM pages across processes. It can do so in the following ways:
- Each app process is forked from an existing process called Zygote. The Zygote process starts when the system boots and loads common framework code and resources (such as activity themes). To start a new app process, the system forks the Zygote process then loads and runs the app's code in the new process. This allows most of the RAM pages allocated for framework code and resources to be shared across all app processes.
In other words, common framework code and resources are loaded once into the Zygote process, and every app process is forked from it.
- Most static data is mmapped into a process. This not only allows that same data to be shared between processes but also allows it to be paged out when needed. Examples of static data include: Dalvik code (by placing it in a pre-linked .odex file for direct mmapping), app resources (by designing the resource table to be a structure that can be mmapped and by aligning the zip entries of the APK), and traditional project elements like native code in .so files.
- In many places, Android shares the same dynamic RAM across processes using explicitly allocated shared memory regions (either with ashmem or gralloc). For example, window surfaces use shared memory between the app and screen compositor, and cursor buffers use shared memory between the content provider and client.
Due to the extensive use of shared memory, determining how much memory your app is using requires care. Techniques to properly determine your app's memory use are discussed in Investigating Your RAM Usage.
2.2 Allocating and Reclaiming App Memory
Here are some facts about how Android allocates then reclaims memory from your app:
- The Dalvik heap for each process is constrained to a single virtual memory range. This defines the logical heap size, which can grow as it needs to (but only up to a limit that the system defines for each app).
The logical heap size is constrained to a single virtual memory range, with a per-app upper limit set by the system.
- The logical size of the heap is not the same as the amount of physical memory used by the heap. When inspecting your app's heap, Android computes a value called the Proportional Set Size (PSS), which accounts for both dirty and clean pages that are shared with other processes—but only in an amount that's proportional to how many apps share that RAM. This (PSS) total is what the system considers to be your physical memory footprint. For more information about PSS, see the Investigating Your RAM Usage guide.
In other words, the heap's logical size differs from its physical footprint; the system measures the latter as the Proportional Set Size (PSS), which charges shared pages to your app only in proportion to how many processes share them.
- The Dalvik heap does not compact the logical size of the heap, meaning that Android does not defragment the heap to close up space. Android can only shrink the logical heap size when there is unused space at the end of the heap. But this doesn't mean the physical memory used by the heap can't shrink. After garbage collection, Dalvik walks the heap and finds unused pages, then returns those pages to the kernel using madvise. So, paired allocations and deallocations of large chunks should result in reclaiming all (or nearly all) the physical memory used. However, reclaiming memory from small allocations can be much less efficient because the page used for a small allocation may still be shared with something else that has not yet been freed.
2.3 Restricting App Memory
To maintain a functional multi-tasking environment, Android sets a hard limit on the heap size for each app. The exact heap size limit varies between devices based on how much RAM the device has available overall. If your app has reached the heap capacity and tries to allocate more memory, it will receive an OutOfMemoryError.
In some cases, you might want to query the system to determine exactly how much heap space you have available on the current device—for example, to determine how much data is safe to keep in a cache. You can query the system for this figure by calling getMemoryClass(). This returns an integer indicating the number of megabytes available for your app's heap. This is discussed further below, under Check how much memory you should use.
The heap limit can be queried like this:
```java
ActivityManager activityManager =
        (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
int heapLimitMb = activityManager.getMemoryClass();  // per-app heap limit in megabytes
```
2.4 Switching Apps
Instead of using swap space when the user switches between apps, Android keeps processes that are not hosting a foreground ("user visible") app component in a least-recently used (LRU) cache. For example, when the user first launches an app, a process is created for it, but when the user leaves the app, that process does not quit. The system keeps the process cached, so if the user later returns to the app, the process is reused for faster app switching.
If your app has a cached process and it retains memory that it currently does not need, then your app—even while the user is not using it—is constraining the system's overall performance. So, as the system runs low on memory, it may kill processes in the LRU cache beginning with the process least recently used, but also giving some consideration toward which processes are most memory intensive. To keep your process cached as long as possible, follow the advice in the following sections about when to release your references.
When an app moves to the background, the system does not kill its process; it keeps it in the LRU cache so the next launch is fast. When memory runs low, cached processes are killed starting with the least recently used, with extra weight given to the most memory-hungry ones.
More information about how processes are cached while not running in the foreground and how Android decides which ones can be killed is available in the Processes and Threads guide.
3. How Your App Should Manage Memory (16 strategies for memory efficiency)
You should consider RAM constraints throughout all phases of development, including during app design (before you begin development). There are many ways you can design and write code that lead to more efficient results, through aggregation of the same techniques applied over and over.
You should apply the following techniques while designing and implementing your app to make it more memory efficient.
3.1 Strategy 1: Use services sparingly
If your app needs a service to perform work in the background, do not keep it running unless it's actively performing a job. Also be careful to never leak your service by failing to stop it when its work is done.
When you start a service, the system prefers to always keep the process for that service running. This makes the process very expensive because the RAM used by the service can’t be used by anything else or paged out. This reduces the number of cached processes that the system can keep in the LRU cache, making app switching less efficient. It can even lead to thrashing in the system when memory is tight and the system can’t maintain enough processes to host all the services currently running.
If a service is not actively doing work, there is no reason to keep it running: a started service's process stays resident and cannot be paged out, which hurts the LRU cache. Prefer IntentService, which stops itself once its intent has been handled.
The best way to limit the lifespan of your service is to use an IntentService, which finishes itself as soon as it's done handling the intent that started it. For more information, read Running in a Background Service.
Leaving a service running when it’s not needed is one of the worst memory-management mistakes an Android app can make. So don’t be greedy by keeping a service for your app running. Not only will it increase the risk of your app performing poorly due to RAM constraints, but users will discover such misbehaving apps and uninstall them.
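A minimal sketch of this pattern, assuming a hypothetical upload task (the class name and the work inside onHandleIntent() are placeholders):

```java
import android.app.IntentService;
import android.content.Intent;

// Hypothetical example: an IntentService stops itself as soon as onHandleIntent() returns,
// so its process is not kept pinned in memory once the work is done.
public class UploadService extends IntentService {

    public UploadService() {
        super("UploadService"); // name of the background worker thread
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Do the background work described by the intent here.
        // When this method returns, the service stops itself automatically.
    }
}
```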
3.2 Strategy 2: Release memory when your user interface becomes hidden
When the user navigates to a different app and your UI is no longer visible, you should release any resources that are used by only your UI. Releasing UI resources at this time can significantly increase the system's capacity for cached processes, which has a direct impact on the quality of the user experience.
To be notified when the user exits your UI, implement the onTrimMemory() callback in your Activity classes. You should use this method to listen for the TRIM_MEMORY_UI_HIDDEN level, which indicates your UI is now hidden from view and you should free resources that only your UI uses.
Notice that your app receives the onTrimMemory() callback with TRIM_MEMORY_UI_HIDDEN only when all the UI components of your app process become hidden from the user. This is distinct from the onStop() callback, which is called when an Activity instance becomes hidden, which occurs even when the user moves to another activity in your app. So although you should implement onStop() to release activity resources such as a network connection or to unregister broadcast receivers, you usually should not release your UI resources until you receive onTrimMemory(TRIM_MEMORY_UI_HIDDEN). This ensures that if the user navigates back from another activity in your app, your UI resources are still available to resume the activity quickly.
In short: release UI-only resources in onTrimMemory() when the TRIM_MEMORY_UI_HIDDEN level arrives (all of the app's UI is hidden), not in onStop(), which fires whenever a single activity is hidden.
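As a rough sketch, an Activity might handle this level as follows; releaseUiCaches() stands in for whatever UI-only resources (bitmap caches, view caches) your app holds:

```java
import android.app.Activity;
import android.content.ComponentCallbacks2;

public class GalleryActivity extends Activity {

    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        // TRIM_MEMORY_UI_HIDDEN (and every higher level) means none of this app's UI is visible.
        if (level >= ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN) {
            releaseUiCaches();
        }
    }

    // Hypothetical helper: drop resources that only the UI needs and that can be rebuilt later.
    private void releaseUiCaches() {
        // e.g. clear in-memory bitmap caches, recycle large drawables, etc.
    }
}
```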
3.3 Strategy 3: Release memory as memory becomes tight
During any stage of your app's lifecycle, the onTrimMemory() callback also tells you when the overall device memory is getting low. You should respond by further releasing resources based on the following memory levels delivered by onTrimMemory(). The first three levels below may be delivered while your app is running in the foreground:
- TRIM_MEMORY_RUNNING_MODERATE: Your app is running and not considered killable, but the device is running low on memory and the system is actively killing processes in the LRU cache.
- TRIM_MEMORY_RUNNING_LOW: Your app is running and not considered killable, but the device is running much lower on memory, so you should release unused resources to improve system performance (which directly impacts your app's performance).
- TRIM_MEMORY_RUNNING_CRITICAL: Your app is still running, but the system has already killed most of the processes in the LRU cache, so you should release all non-critical resources now. If the system cannot reclaim sufficient amounts of RAM, it will clear all of the LRU cache and begin killing processes that the system prefers to keep alive, such as those hosting a running service.
Also, when your app process is currently cached, you may receive one of the following levels from onTrimMemory():
- TRIM_MEMORY_BACKGROUND: The system is running low on memory and your process is near the beginning of the LRU list. Although your app process is not at a high risk of being killed, the system may already be killing processes in the LRU cache. You should release resources that are easy to recover so your process will remain in the list and resume quickly when the user returns to your app.
- TRIM_MEMORY_MODERATE: The system is running low on memory and your process is near the middle of the LRU list. If the system becomes further constrained for memory, there's a chance your process will be killed.
- TRIM_MEMORY_COMPLETE: The system is running low on memory and your process is one of the first to be killed if the system does not recover memory now. You should release everything that's not critical to resuming your app state.
Because the onTrimMemory() callback was added in API level 14, you can use the onLowMemory() callback as a fallback for older versions, which is roughly equivalent to the TRIM_MEMORY_COMPLETE event.
Note: When the system begins killing processes in the LRU cache, although it primarily works bottom-up, it does give some consideration to which processes are consuming more memory and will thus provide the system more memory gain if killed. So the less memory you consume while in the LRU list overall, the better your chances are to remain in the list and be able to quickly resume.
Note that the system does not kill strictly in LRU order; it also favors killing the processes that consume the most memory. So the less memory you hold while cached, the longer you stay in the list and the faster you can resume.
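One way to react to these levels, sketched below, is to branch on them in a custom Application subclass; trimCaches() is a hypothetical helper that frees a chosen fraction of the app's caches:

```java
import android.app.Application;
import android.content.ComponentCallbacks2;

public class MyApplication extends Application {

    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        switch (level) {
            case ComponentCallbacks2.TRIM_MEMORY_RUNNING_MODERATE:
            case ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW:
                // Foreground, memory getting low: give back the easily rebuilt caches.
                trimCaches(0.5f);
                break;
            case ComponentCallbacks2.TRIM_MEMORY_RUNNING_CRITICAL:
            case ComponentCallbacks2.TRIM_MEMORY_BACKGROUND:
            case ComponentCallbacks2.TRIM_MEMORY_MODERATE:
                // Memory is tight, or this cached process is sliding down the LRU list.
                trimCaches(0.9f);
                break;
            case ComponentCallbacks2.TRIM_MEMORY_COMPLETE:
                // Last chance before the process is likely to be killed.
                trimCaches(1.0f);
                break;
            default:
                break;
        }
    }

    // Hypothetical helper: release the given fraction of cached data held by the app.
    private void trimCaches(float fraction) {
        // ...
    }
}
```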
3.4 Strategy 4: Check how much memory you should use
As mentioned earlier, each Android-powered device has a different amount of RAM available to the system and thus provides a different heap limit for each app. You can call getMemoryClass() to get an estimate of your app's available heap in megabytes. If your app tries to allocate more memory than is available here, it will receive an OutOfMemoryError.
In short: getMemoryClass() returns the available heap size in megabytes; allocating beyond that limit throws an OutOfMemoryError.
In very special situations, you can request a larger heap size by setting the largeHeap attribute to "true" in the manifest <application> tag. If you do so, you can call getLargeMemoryClass() to get an estimate of the large heap size.
That is, the android:largeHeap attribute on <application> in the manifest requests a larger heap, and getLargeMemoryClass() reports that larger limit.
However, the ability to request a large heap is intended only for a small set of apps that can justify the need to consume more RAM (such as a large photo editing app). Never request a large heap simply because you've run out of memory and you need a quick fix—you should use it only when you know exactly where all your memory is being allocated and why it must be retained. Yet, even when you're confident your app can justify the large heap, you should avoid requesting it to whatever extent possible. Using the extra memory will increasingly be to the detriment of the overall user experience because garbage collection will take longer and system performance may be slower when task switching or performing other common operations.
Do not request a larger heap in the manifest lightly.
Additionally, the large heap size is not the same on all devices and, when running on devices that have limited RAM, the large heap size may be exactly the same as the regular heap size. So even if you do request the large heap size, you should call getMemoryClass() to check the regular heap size and strive to always stay below that limit.
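For illustration, a memory cache could be sized as a fraction of the value reported by getMemoryClass(); the one-eighth fraction below is an arbitrary example, not a figure from the guide:

```java
import android.app.ActivityManager;
import android.content.Context;
import android.graphics.Bitmap;
import android.util.LruCache;

public final class CacheSizing {

    // Size an in-memory bitmap cache relative to the per-app heap limit.
    public static LruCache<String, Bitmap> createBitmapCache(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        int heapLimitMb = am.getMemoryClass();              // regular heap limit in MB
        // am.getLargeMemoryClass() is only larger when android:largeHeap="true" is set.

        int cacheSizeBytes = heapLimitMb * 1024 * 1024 / 8; // use at most 1/8 of the limit
        return new LruCache<String, Bitmap>(cacheSizeBytes) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount();                // charge entries by byte size
            }
        };
    }
}
```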
3.5 Strategy 5: Avoid wasting memory with bitmaps
When you load a bitmap, keep it in RAM only at the resolution you need for the current device's screen, scaling it down if the original bitmap is a higher resolution. Keep in mind that an increase in bitmap resolution results in a correspondingly squared increase in memory needed, because both the X and Y dimensions increase.
Use bitmaps no larger than the screen requires; memory cost grows with the square of the resolution.
Note: On Android 2.3.x (API level 10) and below, bitmap objects always appear as the same size in your app heap regardless of the image resolution (the actual pixel data is stored separately in native memory). This makes it more difficult to debug the bitmap memory allocation because most heap analysis tools do not see the native allocation. However, beginning in Android 3.0 (API level 11), the bitmap pixel data is allocated in your app's Dalvik heap, improving garbage collection and debuggability. So if your app uses bitmaps and you're having trouble discovering why your app is using some memory on an older device, switch to a device running Android 3.0 or higher to debug it.
For more tips about working with bitmaps, read Managing Bitmap Memory.
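The usual approach from those bitmap guides is to read only the image bounds first, compute an inSampleSize, and then decode at the reduced size. A minimal sketch, with the target dimensions as parameters of my choosing:

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class BitmapLoader {

    // Decode a drawable resource at roughly the requested size instead of full resolution.
    public static Bitmap decodeSampled(Resources res, int resId, int reqWidth, int reqHeight) {
        // First pass: read only the image dimensions, without allocating pixel data.
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeResource(res, resId, options);

        // Pick the largest power-of-two sample size that keeps both dimensions >= the request.
        int inSampleSize = 1;
        while (options.outWidth / (inSampleSize * 2) >= reqWidth
                && options.outHeight / (inSampleSize * 2) >= reqHeight) {
            inSampleSize *= 2;
        }

        // Second pass: decode the downsampled bitmap.
        options.inJustDecodeBounds = false;
        options.inSampleSize = inSampleSize;
        return BitmapFactory.decodeResource(res, resId, options);
    }
}
```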
3.6 Strategy 6: Use optimized data containers (prefer SparseArray over HashMap)
Take advantage of optimized containers in the Android framework, such as SparseArray, SparseBooleanArray, and LongSparseArray. The generic HashMap implementation can be quite memory inefficient because it needs a separate entry object for every mapping. Additionally, the SparseArray classes are more efficient because they avoid the system's need to autobox the key and sometimes value (which creates yet another object or two per entry). And don't be afraid of dropping down to raw arrays when that makes sense.
Prefer SparseArray, SparseBooleanArray, and similar containers over HashMap wherever possible.
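For example, an int-keyed lookup table can use SparseArray directly; the keys and values below are purely illustrative:

```java
import android.util.SparseArray;

import java.util.HashMap;

public final class ContainerExample {

    public static void example() {
        // HashMap boxes every int key into an Integer and allocates an entry object per mapping.
        HashMap<Integer, String> boxed = new HashMap<>();
        boxed.put(1001, "first item");

        // SparseArray keeps primitive int keys in a sorted array: no entry objects, no boxing.
        SparseArray<String> compact = new SparseArray<>();
        compact.put(1001, "first item");
        String value = compact.get(1001);   // lookup by primitive key
    }
}
```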
3.7 Strategy 7: Be aware of memory overhead
Be knowledgeable about the cost and overhead of the language and libraries you are using, and keep this information in mind when you design your app, from start to finish. Often, things on the surface that look innocuous may in fact have a large amount of overhead. Examples include:
From design through implementation, keep memory overhead in mind and do not overlook the small details, such as:
- Enums often require more than twice as much memory as static constants. You should strictly avoid using enums on Android.
- Every class in Java (including anonymous inner classes) uses about 500 bytes of code.
- Every class instance has 12-16 bytes of RAM overhead.
- Putting a single entry into a HashMap requires the allocation of an additional entry object that takes 32 bytes (see the previous section about optimized data containers).
A few bytes here and there quickly add up—app designs that are class- or object-heavy will suffer from this overhead. That can leave you in the difficult position of looking at a heap analysis and realizing your problem is a lot of small objects using up your RAM.
Watch for this pattern in the output of your memory analysis tools.
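To make the enum point concrete, a fixed set of states can be expressed as plain static int constants instead of an enum; the names below are illustrative only:

```java
public final class DownloadState {
    // Plain int constants instead of an enum: no per-value objects, no extra class to load.
    public static final int STATE_IDLE = 0;
    public static final int STATE_RUNNING = 1;
    public static final int STATE_DONE = 2;

    private DownloadState() { }
}
```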
3.8 Strategy 8: Be careful with code abstractions
Often, developers use abstractions simply as a "good programming practice," because abstractions can improve code flexibility and maintenance. However, abstractions come at a significant cost: generally they require a fair amount more code that needs to be executed, requiring more time and more RAM for that code to be mapped into memory. So if your abstractions aren't supplying a significant benefit, you should avoid them.
If an abstraction (an extra interface or class layer) does not bring a clear benefit, don't use it; the extra indirection costs more code, CPU time, and RAM.
3.9 Strategy 9: Use nano protobufs for serialized data
Protocol buffers are a language-neutral, platform-neutral, extensible mechanism designed by Google for serializing structured data—think XML, but smaller, faster, and simpler. If you decide to use protobufs for your data, you should always use nano protobufs in your client-side code. Regular protobufs generate extremely verbose code, which will cause many kinds of problems in your app: increased RAM use, significant APK size increase, slower execution, and quickly hitting the DEX symbol limit.
Use nano protobufs to serialize data in client-side code; note that it must be the nano version. See the protobuf readme.
For more information, see the "Nano version" section in the protobuf readme.
3.10 Strategy 10: Avoid dependency injection frameworks such as Guice
Using a dependency injection framework such as Guice or RoboGuice may be attractive because they can simplify the code you write and provide an adaptive environment that's useful for testing and other configuration changes. However, these frameworks tend to perform a lot of process initialization by scanning your code for annotations, which can require significant amounts of your code to be mapped into RAM even though you don't need it. These mapped pages are allocated into clean memory so Android can drop them, but that won't happen until the pages have been left in memory for a long period of time.
Frameworks such as Guice have benefits, but they also carry real memory costs: scanning your code for annotations maps pages of code into RAM that you may never need.
3.11 Strategy 11: Be careful about using external libraries
External library code is often not written for mobile environments and can be inefficient when used for work on a mobile client. At the very least, when you decide to use an external library, you should assume you are taking on a significant porting and maintenance burden to optimize the library for mobile. Plan for that work up-front and analyze the library in terms of code size and RAM footprint before deciding to use it at all.
If you do decide to use a third-party library, look for (or plan to produce) a version tailored to mobile, and evaluate its code size and RAM footprint up front.
Even libraries supposedly designed for use on Android are potentially dangerous because each library may do things differently. For example, one library may use nano protobufs while another uses micro protobufs. Now you have two different protobuf implementations in your app. This can and will also happen with different implementations of logging, analytics, image loading frameworks, caching, and all kinds of other things you don't expect. ProGuard won't save you here because these will all be lower-level dependencies that are required by the features for which you want the library. This becomes especially problematic when you use an Activity subclass from a library (which will tend to have wide swaths of dependencies), when libraries use reflection (which is common and means you need to spend a lot of time manually tweaking ProGuard to get it to work), and so on.
Even libraries designed for Android can cause unexpected problems, for example when two libraries pull in different protobuf implementations.
Also be careful not to fall into the trap of using a shared library for one or two features out of dozens of other things it does; you don't want to pull in a large amount of code and overhead that you don't even use. At the end of the day, if there isn't an existing implementation that is a strong match for what you need to do, it may be best if you create your own implementation.
3.12 Strategy 12: Optimize overall performance
A variety of information about optimizing your app's overall performance is available in other documents listed in Best Practices for Performance. Many of these documents include optimizations tips for CPU performance, but many of these tips also help optimize your app's memory use, such as by reducing the number of layout objects required by your UI.
You should also read about optimizing your UI with the layout debugging tools and take advantage of the optimization suggestions provided by the lint tool.
Further reading:
- Overall optimization: Best Practices for Performance
- UI: optimizing your UI with the layout debugging tools
- Code: the lint tool
3.13 Strategy 13: Use ProGuard to strip out any unneeded code
The ProGuard tool shrinks, optimizes, and obfuscates your code by removing unused code and renaming classes, fields, and methods with semantically obscure names. Using ProGuard can make your code more compact, requiring fewer RAM pages to be mapped.
In short: ProGuard removes unneeded code, so fewer RAM pages have to be mapped.
3.14 Strategy 14: Use zipalign on your final APK
If you do any post-processing of an APK generated by a build system (including signing it with your final production certificate), then you must run zipalign on it to have it re-aligned. Failing to do so can cause your app to require significantly more RAM, because things like resources can no longer be mmapped from the APK.
Run zipalign after any post-processing of the APK; Google Play will not accept an APK that has not been zipaligned.
Note: Google Play Store does not accept APK files that are not zipaligned.
3.15 Strategy 15: Analyze your RAM usage
Once you achieve a relatively stable build, begin analyzing how much RAM your app is using throughout all stages of its lifecycle. For information about how to analyze your app, read Investigating Your RAM Usage.
Once the app is relatively stable, make a habit of analyzing its memory usage; see Investigating Your RAM Usage.
3.16 Strategy 16: Use multiple processes where appropriate
If it's appropriate for your app, an advanced technique that may help you manage your app's memory is dividing components of your app into multiple processes. This technique must always be used carefully and most apps should not run multiple processes, as it can easily increase—rather than decrease—your RAM footprint if done incorrectly. It is primarily useful to apps that may run significant work in the background as well as the foreground and can manage those operations separately.
An example of when multiple processes may be appropriate is when building a music player that plays music from a service for long periods of time. If the entire app runs in one process, then many of the allocations performed for its activity UI must be kept around as long as it is playing music, even if the user is currently in another app and the service is controlling the playback. An app like this may be split into two processes: one for its UI, and the other for the work that continues running in the background service.
Used appropriately, multiple processes can help; declare the android:process attribute on a component in the manifest to run it in a separate process.
You can specify a separate process for each app component by declaring the android:process attribute for each component in the manifest file. For example, you can specify that your service should run in a process separate from your app's main process by declaring a new process named "background" (but you can name the process anything you like):
```xml
<service android:name=".PlaybackService"
         android:process=":background" />
```
Your process name should begin with a colon (':') to ensure that the process remains private to your app.
A process name that begins with a colon keeps the process private to your app.
Before you decide to create a new process, you need to understand the memory implications. To illustrate the consequences of each process, consider that an empty process doing basically nothing has an extra memory footprint of about 1.4MB, as shown by the memory information dump below.
Sample memory dump; note the command: adb shell dumpsys meminfo com.example.android.apis:empty
```
adb shell dumpsys meminfo com.example.android.apis:empty

** MEMINFO in pid 10172 [com.example.android.apis:empty] **
                     Pss     Pss  Shared Private  Shared Private    Heap    Heap    Heap
                   Total   Clean   Dirty   Dirty   Clean   Clean    Size   Alloc    Free
                  ------  ------  ------  ------  ------  ------  ------  ------  ------
     Native Heap       0       0       0       0       0       0    1864    1800      63
     Dalvik Heap     764       0    5228     316       0       0    5584    5499      85
    Dalvik Other     619       0    3784     448       0       0
           Stack      28       0       8      28       0       0
       Other dev       4       0      12       0       0       4
        .so mmap     287       0    2840     212     972       0
       .apk mmap      54       0       0       0     136       0
       .dex mmap     250     148       0       0    3704     148
      Other mmap       8       0       8       8      20       0
         Unknown     403       0     600     380       0       0
           TOTAL    2417     148   12480    1392    4832     152    7448    7299     148
```
Note: More information about how to read this output is provided in Investigating Your RAM Usage. The key data here is the Private Dirty and Private Clean memory, which shows that this process is using almost 1.4MB of non-pageable RAM (distributed across the Dalvik heap, native allocations, book-keeping, and library-loading), and another 150K of RAM for code that has been mapped in to execute.
This memory footprint for an empty process is fairly significant and it can quickly grow as you start doing work in that process. For example, here is the memory use of a process that is created only to show an activity with some text in it:
The dump below is for a process that only shows an activity with some text:
```
** MEMINFO in pid 10226 [com.example.android.helloactivity] **
                     Pss     Pss  Shared Private  Shared Private    Heap    Heap    Heap
                   Total   Clean   Dirty   Dirty   Clean   Clean    Size   Alloc    Free
                  ------  ------  ------  ------  ------  ------  ------  ------  ------
     Native Heap       0       0       0       0       0       0    3000    2951      48
     Dalvik Heap    1074       0    4928     776       0       0    5744    5658      86
    Dalvik Other     802       0    3612     664       0       0
           Stack      28       0       8      28       0       0
          Ashmem       6       0      16       0       0       0
       Other dev     108       0      24     104       0       4
        .so mmap    2166       0    2824    1828    3756       0
       .apk mmap      48       0       0       0     632       0
       .ttf mmap       3       0       0       0      24       0
       .dex mmap     292       4       0       0    5672       4
      Other mmap      10       0       8       8      68       0
         Unknown     632       0     412     624       0       0
           TOTAL    5169       4   11832    4032   10152       8    8744    8609     134
```
The process has now almost tripled in size, to 4MB, simply by showing some text in the UI. This leads to an important conclusion: If you are going to split your app into multiple processes, only one process should be responsible for UI. Other processes should avoid any UI, as this will quickly increase the RAM required by the process (especially once you start loading bitmap assets and other resources). It may then be hard or impossible to reduce the memory usage once the UI is drawn.
In general, let one process own the UI and keep the other worker processes free of UI.
Additionally, when running more than one process, it's more important than ever that you keep your code as lean as possible, because any unnecessary RAM overhead for common implementations is now replicated in each process. For example, if you are using enums (though you should not use enums), all of the RAM needed to create and initialize those constants is duplicated in each process, and any abstractions you have with adapters and temporaries or other overhead will likewise be replicated.
Another concern with multiple processes is the dependencies that exist between them. For example, if your app has a content provider that you have running in the default process which also hosts your UI, then code in a background process that uses that content provider will also require that your UI process remain in RAM. If your goal is to have a background process that can run independently of a heavy-weight UI process, it can't have dependencies on content providers or services that execute in the UI process.
With multiple processes, the dependencies between them matter: a background process meant to run independently must not depend on a content provider or service that runs in the UI process.