Memory optimization

Memory management mechanism

Android applications run on the Android virtual machine; memory allocation and garbage collection for the application are handled entirely by the virtual machine.

In the Android system, memory for objects is allocated with the new keyword, and memory is released by the garbage collector (GC). Developers do not need to manage memory explicitly during development.

In the Android system, the virtual machine has two implementations: Dalvik and ART (to be covered in more detail later).

Generation Heap Memory

In more recent Android versions, the heap is organized according to the Generational Heap Memory model.

In the generational heap model, the most recently allocated objects are placed in the Young Generation area. GC (garbage collection) is triggered at certain points; objects that survive collection are promoted to the Old Generation according to certain rules, and after staying there long enough they move on to the Permanent Generation area. The system performs different GC operations depending on which generation is being collected.

GC decides whether an object can be reclaimed by checking whether it is still referenced by live objects; objects with no remaining references are reclaimed dynamically and their memory is released.

Young Generation

The Young Generation is divided into three areas: one larger Eden area and two smaller Survivor areas, with a ratio of 8:1:1. The Eden area stores newly created objects, and the Survivor areas store objects that survive each garbage collection.

The Eden area holds newly created objects. Objects that survive one GC are copied to Survivor space S0; objects that survive another GC are copied to Survivor space S1. An object bounces between the Survivor spaces like this until it has survived 15 collections, after which it is promoted to the Old Generation.

Old Generation

The Old Generation stores objects promoted from the Young Generation; these objects generally have a relatively long life cycle.

Large objects (Java objects that require contiguous memory, typically very long strings and large arrays) go directly into the Old Generation. This avoids the large amount of memory copying that the copying algorithm would otherwise perform between the Eden area and the two Survivor areas.

Permanent Generation

The Permanent Generation is a permanent storage area that does not belong to the Java heap. Classes are placed here when they are loaded by the class loader. If a Java application is large, for example it contains many classes, it is advisable to increase the size of this area to meet the memory requirements of loading them.

It stores static classes and methods; garbage collection has little effect on the Permanent Generation.

Android allocation and recycling

The Android system does not defragment the free memory area in Heap.

Before allocating new memory, the system only checks whether the remaining space at the end of the heap is sufficient. If it is not, a GC is triggered to free up more space.


The significance of optimizing memory

On Android (on the virtual machine), applying for and releasing memory is really the responsibility of the system layer. Object memory is allocated automatically, and the virtual machine tracks every memory object; once it determines that an object can be released, it returns that memory to the heap. The whole process requires no intervention from developers. The component responsible for this in the Android system is the garbage collector, the GC.

During GC, all threads are suspended, including the UI thread, and memory in different areas is released at different speeds. The suspended threads can only resume after the GC completes.

But what if GC happens over and over? The time available for other work is squeezed, which affects rendering. When a large amount of memory is requested in a short period of time and little of it is released effectively, memory churn builds up: once the remaining memory drops to the threshold, garbage collection is triggered again. All this frequent GC work adds up to a lot of time and can cause jank.

Conclusion: frequent GC aggravates jank and hurts fluency. Therefore, try to reduce GC activity to improve the application's fluency and lower the probability of jank.

If memory usage peaks near the heap threshold, or such memory spikes occur frequently, then requesting a large block of memory at one of these peaks will cause an OOM because there is not enough heap space left.

In general, the main cause of OOM is that the available memory cannot satisfy the amount of memory the application requests this time (insufficient memory).

Avoid memory leaks

Definition of leak

A Java object has its own life cycle. When the object is no longer used, it should be garbage collected; but if, for some reason, the object is still referenced and cannot be reclaimed, it remains in memory, and the object has leaked. (Useless but not dead.)

Common scenarios

1. Resource objects are not closed

Resource objects (such as Cursor, File, etc.) often use internal buffers or caches; they should be closed when no longer in use.

Their buffers exist not only inside the Java virtual machine but also outside it. Simply setting the object reference to null without closing the resource will often cause a memory leak.

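As a minimal sketch, a Cursor should be closed in a finally block once it is no longer needed. The query here against the contacts provider is only an illustration (it assumes the READ_CONTACTS permission and runs inside a Context such as an Activity):

Cursor cursor = null;
try {
    cursor = getContentResolver().query(ContactsContract.Contacts.CONTENT_URI,
            null, null, null, null);
    if (cursor != null && cursor.moveToFirst()) {
        String name = cursor.getString(
                cursor.getColumnIndexOrThrow(ContactsContract.Contacts.DISPLAY_NAME));
        // use the data here
    }
} finally {
    if (cursor != null) {
        cursor.close(); // releases the native buffer backing the Cursor
    }
}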

2. Registered objects are not unregistered

If an object is registered but never unregistered, a reference to it is kept in the observer list, which prevents it from being garbage collected. This typically happens with broadcast receivers, registered observers and listeners, and so on.

(This follows the observer pattern: the observable keeps a list of registered observers and therefore holds a reference to each of them.)

Example: suppose an Activity wants to monitor the telephony service to obtain information such as signal strength. It defines a listener object in the Activity and registers it with the TelephonyManager service. In theory, the Activity object should be released after the Activity exits.

However, if you forget to unregister the previously registered listener when the Activity is destroyed, the Activity object cannot be reclaimed. If the Activity is entered repeatedly, a large number of Activity objects accumulate without being reclaimed, causing frequent GC and even OOM.
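A sketch of the register/unregister pairing described above. The class name SignalActivity is made up, and this uses the pre-API-31 PhoneStateListener API:

public class SignalActivity extends AppCompatActivity {
    private TelephonyManager mTelephonyManager;
    // Keep the listener in a field so that exactly the same instance can be unregistered later.
    private final PhoneStateListener mListener = new PhoneStateListener() {
        @Override
        public void onSignalStrengthsChanged(SignalStrength signalStrength) {
            // read the signal strength here
        }
    };

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mTelephonyManager = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
        // Registering adds the listener (and, indirectly, this Activity) to the system service.
        mTelephonyManager.listen(mListener, PhoneStateListener.LISTEN_SIGNAL_STRENGTHS);
    }

    @Override
    protected void onDestroy() {
        // Unregister, otherwise the TelephonyManager keeps a reference chain to this Activity.
        mTelephonyManager.listen(mListener, PhoneStateListener.LISTEN_NONE);
        super.onDestroy();
    }
}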

3. Static variables hold large data objects

Static variables hold references to objects for a long time, preventing them from being garbage collected. If a static variable holds a large data object, such as a Bitmap, it can easily lead to problems such as insufficient memory.
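A minimal sketch of the pattern, with made-up class and field names. Anything assigned to a static field stays reachable for the whole life of the process unless it is explicitly cleared:

public class ImageHolder {
    // Bad: a static field keeps the Bitmap alive for the lifetime of the process.
    private static Bitmap sBackground;

    public static void cache(Bitmap bitmap) {
        sBackground = bitmap;
    }

    // If a static cache is really needed, clear it as soon as it is no longer used.
    public static void clear() {
        sBackground = null;
    }
}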

4. Static instances of non-static inner classes

A non-static inner class implicitly holds a reference to its outer class. If an instance of a non-static inner class is assigned to a static field, it indirectly holds a reference to the outer class for a long time and prevents it from being garbage collected.

public class MainActivity extends AppCompatActivity {
    // A static field holding an instance of a non-static inner class:
    // that instance keeps an implicit reference to the MainActivity that created it.
    static TestModule mTestModule = null;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        if (mTestModule == null) {
            mTestModule = new TestModule(this); // leaks this Activity
        }
    }

    class TestModule {
        private Context mContext;

        public TestModule(Context ctx) {
            mContext = ctx;
        }
    }
}
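One way to avoid the leak, as a sketch: make the inner class static, so it no longer captures the Activity, and hand it the application context instead:

public class MainActivity extends AppCompatActivity {
    static TestModule sTestModule = null;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        if (sTestModule == null) {
            // The application context lives as long as the process, so holding it statically is safe.
            sTestModule = new TestModule(getApplicationContext());
        }
    }

    // static: no implicit reference to the enclosing Activity
    static class TestModule {
        private final Context mContext;

        TestModule(Context ctx) {
            mContext = ctx;
        }
    }
}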
 

5. Temporary memory leaks caused by Handler

mHandler is an instance of a non-static anonymous inner class of Handler, so it holds a reference to the outer Activity. The message queue is continuously polled and processed on a Looper thread, so the following can happen: when the Activity exits, there are still unprocessed messages (or messages being processed) in the queue; a Message in the queue holds a reference to the mHandler instance, and mHandler holds a reference to the Activity, so the Activity's memory cannot be reclaimed, causing a leak.

To avoid this memory leak, two changes are needed:

  • Use a static inner Handler class and have the Handler hold its target object through a weak reference, so the referenced object can still be reclaimed during GC.
  • In the Activity's onDestroy (or onStop), remove the messages remaining in the message queue, so no pending messages are left waiting to be processed on the Looper thread.
public class HandlerFixActivity extends AppCompatActivity {
    private MyHandler myHandler = new MyHandler(this);

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    /**
     * Static Handler inner class: it has no implicit reference to the Activity
     * and reaches the outer context only through a WeakReference.
     */
    private static class MyHandler extends Handler {
        private WeakReference<Context> mContext = null;

        public MyHandler(Context context) {
            mContext = new WeakReference<Context>(context); // hold the context weakly
        }

        @Override
        public void handleMessage(@NonNull Message msg) {
            super.handleMessage(msg);
            // mContext.get() may return null if the Activity has already been reclaimed.
        }
    }

    /**
     * Simulates an asynchronous task posting its result back through the handler.
     */
    private void doGetDataAsyncTask() {
        Message message = Message.obtain();
        message.what = 1;
        message.obj = " ";
        myHandler.sendMessage(message);
    }

    @Override
    protected void onDestroy() {
        // Remove all pending messages and callbacks so nothing keeps referencing the handler.
        myHandler.removeCallbacksAndMessages(null);
        super.onDestroy();
    }
}
 

6. Memory leaks caused by objects in the container not being cleaned up

References to objects are usually added to a collection. If a reference is not removed from the collection when the object is no longer needed, the collection keeps growing. If the collection is static, the situation is even worse.
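A minimal sketch with made-up names: whatever is added to a static collection must eventually be removed, otherwise it stays reachable for the life of the process.

public class ListenerRegistry {
    // A static collection: everything added here is reachable for the lifetime of the process.
    private static final List<Object> sListeners = new ArrayList<>();

    public static void add(Object listener) {
        sListeners.add(listener);
    }

    // The matching remove call is what prevents the leak.
    public static void remove(Object listener) {
        sListeners.remove(listener);
    }
}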

7. WebView

WebView on Android not only has serious compatibility problems: it differs across Android versions, and also across manufacturers' custom ROMs. Worse, WebView has a leak problem; once it has been used in the application, its memory is not released.

The usual solution is to run the WebView in a separate process and communicate with the main application via AIDL. The WebView process can then be destroyed at a time chosen by the business logic, so that its memory is actually released.


Optimize memory space

Having no memory leaks does not mean there is nothing to optimize. On mobile devices, storage space is limited; using the smallest possible memory objects and the least resource overhead lets the GC reclaim resources more efficiently and keeps the application running stably and efficiently.

(Use less memory to do more work.)

1. Object references

Java 1.2 introduced three additional kinds of object references:

soft references (SoftReference), weak references (WeakReference), and phantom references (PhantomReference).

If the reference type is not specified, a reference is a strong reference by default.

  • Strong references

    Strong references are the ordinary references used everywhere. If an object is strongly referenced, the GC will never reclaim it. When memory is insufficient, the Java virtual machine throws an OOM error rather than reclaiming strongly referenced objects to relieve the shortage. Therefore, if a strongly referenced object is no longer needed, remember to release the reference (or downgrade it to a weak reference) at the appropriate point in the life cycle so it can be reclaimed.

  • Soft references

    While keeping the referenced object reachable, soft references guarantee that all softly referenced objects are cleared before the virtual machine reports an out-of-memory condition.

    The key point is that the GC may (or may not) release softly referenced objects at runtime; whether an object is released depends on the GC algorithm and on how much memory is available when the GC runs. If an object is only softly referenced and memory is sufficient, it is not reclaimed during GC and can still be used by the program; if memory is insufficient, its memory is reclaimed. Soft references can therefore be used to implement memory-sensitive caches. (A short code sketch follows this list.)

  • Weak references. A typical use of weak references is a canonicalized mapping. Weak references are also useful for objects that have a long life cycle and are not expensive to recreate. If the garbage collector encounters a weakly referenced object while running, it releases the object referenced by the WeakReference.

    The difference between weak and soft references is that objects with only weak references have an even shorter life cycle. While the garbage collector thread scans the memory under its control, as soon as it finds an object with only weak references, it reclaims its memory regardless of whether the current memory space is sufficient. However, because the garbage collector is a low-priority thread, it may not find such objects immediately.

  • Phantom references. Phantom references cannot be used to access an object; they are only used, together with a ReferenceQueue, to track when an object is about to be reclaimed.
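A brief sketch of how these reference types look in code (the drawable resource is a placeholder):

// Strong reference: the Bitmap cannot be reclaimed while 'strong' is reachable.
Bitmap strong = BitmapFactory.decodeResource(getResources(), R.drawable.photo);

// Soft reference: cleared only when memory runs low, suitable for memory-sensitive caches.
SoftReference<Bitmap> softRef = new SoftReference<>(strong);

// Weak reference: cleared at the next GC once no strong references remain.
WeakReference<Bitmap> weakRef = new WeakReference<>(strong);

strong = null; // drop the strong reference

Bitmap cached = softRef.get(); // may be null if the GC has already cleared it
if (cached != null) {
    // use the cached bitmap
}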

2. Reduce unnecessary memory overhead

  • AutoBoxing

    boolean (8 bits)

    int (32 bits)

    float (32 bits)

    long (64 bits)

    Their boxed counterparts are full objects (described by a class), such as Boolean, Integer (16 bytes), Float, etc., and are far more expensive.

    Integer num = 0; // boxed Integer: a 16-byte object
    for (int i = 0; i < 100; i++) { // i is a primitive int: 4 bytes
        num += i; // each += unboxes num and boxes a new Integer object
    }
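    Using the primitive type avoids the repeated boxing; a minimal sketch of the same loop without autoboxing:

    int num = 0; // primitive int, no object is created
    for (int i = 0; i < 100; i++) {
        num += i; // plain integer addition, no temporary Integer objects
    }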
     
  • Memory reuse

In Android, some resources can be reused, and the system will also provide corresponding interfaces or methods.

  • 1. Effective use of the system's own resources

    The system has a lot of built-in resources, such as colors, strings, commonly used icons, and some animations and page styles and simple layouts, which can be used directly.

    Benefits: Direct use of the system's own resources can reduce memory overhead, reduce APK size and improve reusability.

  • 2. View reuse

    When a list contains many repeated sub-items, the ViewHolder pattern can be used to reuse the convertView; this is basically the standard approach for all list-style controls, such as ListView and GridView.

  • 3. Object pool

    You can explicitly create an object pool in the program at the time of program design, and then implement reuse logic to reduce the repeated creation of objects, thereby reducing memory allocation and recycling

  • 4. Bitmap object reuse

Use the inBitmap feature of Bitmap to improve the system's efficiency when allocating and releasing Bitmap memory; it not only reuses memory but also speeds up decoding. The inBitmap option tells the bitmap decoder to try to reuse an existing memory region, so the newly decoded Bitmap reuses the pixel memory of a previous bitmap on the heap instead of requesting new memory.
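A minimal sketch of inBitmap reuse (the drawable ids are placeholders; the reused bitmap must be mutable and large enough, with stricter same-size rules before API 19):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true; // the bitmap to be reused must be mutable

// Decode the first image normally; its pixel memory will be reused below.
Bitmap reusable = BitmapFactory.decodeResource(getResources(), R.drawable.first, options);

// Ask the decoder to reuse the existing pixel memory instead of allocating new memory.
options.inBitmap = reusable;
Bitmap second = BitmapFactory.decodeResource(getResources(), R.drawable.second, options);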

3. Use the best data type

  • HashMap and ArrayMap

    HashMap is a key-value collection built on a hash table. When an element is put into a HashMap, a hash is computed from the key's hashCode, and that hash determines the element's position in the backing array.

Key -> hashCode() -> hash -> index into the backing array. If there is no element at that index, the entry is stored there directly; if there is already an element, the entries at that index are kept as a linked list, with the new entry at the head of the chain and the older ones behind it.

In other words, before an object is inserted into the HashMap, its key is hashed to an index into the array, and the key-value entry is stored at that index.

When several keys hash to the same array position, there is a hash collision. To reduce the chance of such collisions, HashMap allocates a fairly large backing array, and additional logic (the chaining described above) handles the collisions that still occur.

Such a large backing array is expensive in memory; from the perspective of saving memory, it is undesirable.

HashMap<Integer, String> hashMap = new HashMap<>();
hashMap.put(1, "one");
hashMap.put(2, "two");
hashMap.put(3, "three");
String value = hashMap.get(1);
 

ArrayMap

ArrayMap provides the same functionality as HashMap while avoiding the excessive memory overhead.

ArrayMap uses two small arrays: one stores the sorted list of the keys' hashes, and the other stores the key-value pairs interleaved in the same order as the hash array.

When a value is needed, ArrayMap computes the key's hash, binary-searches the hash array for the corresponding index, and then looks up the value at that index in the other array. If the key found there does not match the requested key, a collision has occurred; to resolve it, ArrayMap searches outwards from that position, upwards and downwards, comparing keys one by one.

This brings a problem: as the number of objects in the ArrayMap grows, the time needed to access an individual object also grows.

For deletion, ArrayMap shifts the following elements forward over the removed entry (or copies the arrays without it); insertion likewise requires shifting elements, or reallocating the arrays, so that the ArrayMap stays ordered.

In general, insert and delete operations are worse in ArrayMap than in HashMap, but for small collections the difference is negligible. Because the collections involved are small, HashMap occupies noticeably more memory than ArrayMap.

ArrayMap is also simpler and more efficient to traverse than HashMap.

HashMap<Object, Object> mHashMap = new HashMap<>();
for (Iterator<Map.Entry<Object, Object>> it = mHashMap.entrySet().iterator(); it.hasNext(); ) {
    Map.Entry<Object, Object> entry = it.next();
    // entry.getKey() / entry.getValue(); an Entry object is involved for every element
}

ArrayMap<Object, Object> mArrayMap = new ArrayMap<>();
for (int i = 0; i < mArrayMap.size(); i++) {
    // ArrayMap is traversed by index; no iterator or Entry objects are needed
    Object key = mArrayMap.keyAt(i);
    Object value = mArrayMap.valueAt(i);
}
        
 

Conditions for using ArrayMap:

The number of objects is small (under about a thousand) but they are accessed frequently, or insertions and deletions are infrequent.

When a map container holds other map containers as values, the nested containers should preferably also be ArrayMaps.

Tips: in performance optimization, sometimes you trade time for space and sometimes space for time; what matters most is finding the balance between the two that gives the best result.

  • Enumerations

    public static final int ONE = 1;
    public static final int TWO = 2;

    public int getNum(int num) throws IllegalAccessException {
        switch (num) {
            case ONE:
                return 1;
            case TWO:
                return 2;
            default:
                throw new IllegalAccessException("Unknown");
        }
    }
     

    The getNum(int num) parameter is not safe: if the input is neither 1 nor 2, an exception is thrown. In other words, the parameter itself is not constrained, which can lead to abnormal logic in the business code and adds an element of unsafety.

    public enum NUMBER {
        NUMBER_ONE,
        NUMBER_TWO
    }

    public int getNum(NUMBER number) {
        switch (number) {
            case NUMBER_ONE:
                return 1;
            case NUMBER_TWO:
                return 2;
            default:
                throw new IllegalArgumentException("Unknown");
        }
    }
     

    As you can see, the parameter is now constrained to the enumeration, so no extra error handling is needed.

    The biggest advantage of enumeration is type safety, but its memory overhead is about three times that of plain integer constants.

  • LruCache

    LruCache is a least-recently-used cache. It stores the cached objects with strong references and maintains a queue internally (actually the doubly linked list inside a LinkedHashMap, which LruCache wraps with thread-safe operations). When one of the values is accessed, it is moved to the tail of the queue; when the cache is full, the value at the head of the queue is evicted and can then be garbage collected.

private LruCache<String, Bitmap> mLruCache;

    /**
     * Initialise the cache with 1/8 of the app's maximum available memory.
     */
    public MyImageLoader() {
        int maxMemory = (int) Runtime.getRuntime().maxMemory();
        int cacheSize = maxMemory / 8;
        mLruCache = new LruCache<String, Bitmap>(cacheSize) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                // Measure each entry by the bitmap's byte count, in the same unit as cacheSize.
                return value.getByteCount();
            }
        };
    }

    /**
     * Put a bitmap into the LruCache if it is not already cached.
     *
     * @param key    cache key
     * @param bitmap bitmap to cache
     */
    public void addBitmap(String key, Bitmap bitmap) {
        if (getBitmap(key) == null) {
            mLruCache.put(key, bitmap);
        }
    }

    /**
     * Get a bitmap from the cache.
     *
     * @param key cache key
     * @return the cached bitmap, or null if it is not in the cache
     */
    public Bitmap getBitmap(String key) {
        return mLruCache.get(key);
    }

    /**
     * Remove a Bitmap from the memory cache.
     *
     * @param key cache key
     */
    public void removeBitmapFromMemory(String key) {
        mLruCache.remove(key);
    }
 

Several important methods:

1. public final V get(K key)

Returns the value mapped to the key in the cache, or null if there is none. Calling this method moves the accessed value to the tail of the queue.

2. public final V put(K key, V value)

Stores the value under the given key; the stored value is moved to the tail of the queue.

3. protected int sizeOf(K key, V value)

Returns the size of each cache entry, which is used to decide whether the cache is full. It should be overridden whenever entries have different sizes, for example to return a bitmap's byte count.

4. protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue)

Called when a cached entry is evicted or removed. By default it is an empty method; it can be overridden, but this is not required.

Summary: the official Android documentation recommends using LruCache as the in-memory image cache; it stores a bounded set of strong references.

The LruCache should be neither too large nor too small: too large and it squeezes the memory available to the rest of the app, too small and it constantly evicts entries and provides little benefit.

Related articles

"Memory Optimization"

www.jianshu.com/p/8b1d9c86f...