While reading through HashMap these past couple of days, I kept getting confused by the load factor (float loadFactor). After a day of digging into it, I've put together a bit of my own understanding.
At the bottom of HashMap there is an Entry array named table. When you instantiate a HashMap you can pass two parameters: int initialCapacity (the initial array size, default 16) and float loadFactor (the load factor, default 0.75). The constructor first computes capacity, the smallest power of two that is greater than or equal to initialCapacity; for example, if initialCapacity is 15, capacity becomes 16. It also computes a threshold, which is capacity * loadFactor. Here is the source code:
/**
 * Constructs an empty <tt>HashMap</tt> with the specified initial
 * capacity and load factor.
 *
 * @param  initialCapacity the initial capacity
 * @param  loadFactor      the load factor
 * @throws IllegalArgumentException if the initial capacity is negative
 *         or the load factor is nonpositive
 */
public HashMap(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);

    // Find a power of 2 >= initialCapacity
    int capacity = 1;
    while (capacity < initialCapacity)
        capacity <<= 1;

    this.loadFactor = loadFactor;
    threshold = (int)(capacity * loadFactor);
    table = new Entry[capacity];
    init();
}
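As a quick sanity check, here is a small standalone sketch of the same math (the class name and variables are mine, only the rounding loop and the threshold formula come from the constructor above):

// Minimal sketch reproducing the constructor's capacity/threshold arithmetic.
public class CapacityDemo {
    public static void main(String[] args) {
        int initialCapacity = 15;
        float loadFactor = 0.75f;

        // Smallest power of 2 >= initialCapacity, same loop as the constructor.
        int capacity = 1;
        while (capacity < initialCapacity)
            capacity <<= 1;

        int threshold = (int) (capacity * loadFactor);

        System.out.println(capacity);  // 16
        System.out.println(threshold); // 12
    }
}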
With the HashMap created, let's look at the put method. It first computes a hash from the key's hashCode(), then uses indexFor to decide which slot of the table array the element belongs in (my guess was that it takes the hash modulo table.length; in fact indexFor computes h & (length - 1), which amounts to the same thing because the length is always a power of two). It then walks the Entry chain at that slot to check whether this key is already present; if it is, that entry's value is simply updated. Source code:
/**
 * Associates the specified value with the specified key in this map.
 * If the map previously contained a mapping for the key, the old
 * value is replaced.
 *
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 * @return the previous value associated with <tt>key</tt>, or
 *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
 *         (A <tt>null</tt> return can also indicate that the map
 *         previously associated <tt>null</tt> with <tt>key</tt>.)
 */
public V put(K key, V value) {
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key.hashCode());
    int i = indexFor(hash, table.length);
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }

    modCount++;
    addEntry(hash, key, value, i);
    return null;
}
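To convince myself about indexFor, here is a small demo of my own (the class and sample hashes are illustrative) showing that the bit mask and the modulo give the same slot when the table length is a power of two:

// Bucket index via bit mask vs. modulo; equivalent only for power-of-two lengths and non-negative hashes.
public class IndexForDemo {
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16;
        for (int h : new int[] {7, 21, 37, 1024}) {
            System.out.println(indexFor(h, length) + " == " + (h % length));
        }
    }
}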
Next, the addEntry method. The parameter bucketIndex is the index of the table slot the new element should go into. The method first takes out whatever Entry currently sits at that slot, puts the new element into the array in its place, and points the new element's next at the entry it took out, which forms the Entry linked list. (My description may not be very clear; roughly, the newly added entry becomes the head of the chain at that slot and points to the previous list.) After inserting, it checks whether the current size has reached the threshold; if so, the table is resized. Source code:
/**
 * Adds a new entry with the specified key, value and hash code to
 * the specified bucket.  It is the responsibility of this
 * method to resize the table if appropriate.
 *
 * Subclass overrides this to alter the behavior of put method.
 */
void addEntry(int hash, K key, V value, int bucketIndex) {
    Entry<K,V> e = table[bucketIndex];
    table[bucketIndex] = new Entry<K,V>(hash, key, value, e);
    if (size++ >= threshold)
        resize(2 * table.length);
}
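To make the head-insertion concrete, here is a toy model of a single bucket (my own simplified Node type, not the JDK's Entry):

// Toy model of one bucket: each new node is inserted at the head of the chain.
public class HeadInsertDemo {
    static class Node {
        final String key;
        final Node next;
        Node(String key, Node next) { this.key = key; this.next = next; }
    }

    public static void main(String[] args) {
        Node bucket = null;                 // empty slot in the table
        for (String key : new String[] {"a", "b", "c"}) {
            bucket = new Node(key, bucket); // same shape as table[i] = new Entry(..., table[i])
        }
        // Prints "c b a": the most recently added key sits at the head of the chain.
        for (Node n = bucket; n != null; n = n.next) {
            System.out.print(n.key + " ");
        }
    }
}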
Now the resize method. It is fairly simple: it allocates a new table (Entry array) and computes a new threshold from the new capacity and the load factor. The interesting part is the transfer method it calls. Source code:
/**
 * Rehashes the contents of this map into a new array with a
 * larger capacity.  This method is called automatically when the
 * number of keys in this map reaches its threshold.
 *
 * If current capacity is MAXIMUM_CAPACITY, this method does not
 * resize the map, but sets threshold to Integer.MAX_VALUE.
 * This has the effect of preventing future calls.
 *
 * @param newCapacity the new capacity, MUST be a power of two;
 *        must be greater than current capacity unless current
 *        capacity is MAXIMUM_CAPACITY (in which case value
 *        is irrelevant).
 */
void resize(int newCapacity) {
    Entry[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {
        threshold = Integer.MAX_VALUE;
        return;
    }

    Entry[] newTable = new Entry[newCapacity];
    transfer(newTable);
    table = newTable;
    threshold = (int)(newCapacity * loadFactor);
}
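To put concrete numbers on this (my own arithmetic, using the defaults): with capacity 16 and loadFactor 0.75 the threshold is 16 * 0.75 = 12, so the resize to 32 is triggered while the 13th entry is being added, and the new threshold becomes 32 * 0.75 = 24.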
Because the table's capacity has grown, table.length has changed as well, so the slot each stored element belongs in may no longer be the same. For example, if length was 16 and is now 32, hash % 16 and hash % 32 can easily give different results, so the positions have to be recomputed from scratch. This is done by walking the array and, for each slot, walking its Entry chain. (I believe this is what people mean by "rehash".)
/**
 * Transfers all entries from current table to newTable.
 */
void transfer(Entry[] newTable) {
    Entry[] src = table;
    int newCapacity = newTable.length;
    for (int j = 0; j < src.length; j++) {
        Entry<K,V> e = src[j];
        if (e != null) {
            src[j] = null;
            do {
                Entry<K,V> next = e.next;
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            } while (e != null);
        }
    }
}
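Here is a quick demo of why positions can change after the table doubles (again my own sketch, using the same h & (length - 1) indexing as indexFor):

// The same hash can land in a different bucket once the table length doubles.
public class RehashDemo {
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        for (int h : new int[] {5, 21}) {
            System.out.println("hash " + h
                    + ": old index " + indexFor(h, 16)
                    + ", new index " + indexFor(h, 32));
        }
        // hash 5 stays at index 5, but hash 21 moves from index 5 to index 21:
        // whether an entry moves depends on the newly examined bit of its hash.
    }
}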
Summary: with a larger load factor, the table is resized less often, so relatively less memory is used (cheaper in space), but each Entry chain tends to hold more elements, so lookups take longer (more expensive in time). Conversely, with a smaller load factor the table is resized more readily, so more memory is consumed, but the Entry chains stay shorter and lookups are faster. That is why the load factor is described as a trade-off between time and space, and why you should decide whether you care more about saving time or saving space when setting it.
Note: when setting initialCapacity, try to use a power of two; that way the capacity you get is exactly what you asked for, instead of being rounded up to the next power of two that is larger than initialCapacity.
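Related to this, if you roughly know how many entries the map will hold, you can size it so that the threshold is never reached and transfer never runs. This is just my own sketch of the arithmetic, not anything taken from the JDK:

// Pick an initial capacity whose threshold (capacity * loadFactor) covers the expected number of entries.
public class PreSizeDemo {
    public static void main(String[] args) {
        int expectedEntries = 1000;
        float loadFactor = 0.75f;

        // Smallest power of two large enough that no resize is triggered.
        int capacity = 1;
        while (capacity * loadFactor < expectedEntries)
            capacity <<= 1;

        System.out.println(capacity); // 2048: 2048 * 0.75 = 1536 >= 1000 (1024 * 0.75 = 768 is too small)
        java.util.Map<String, String> map =
                new java.util.HashMap<String, String>(capacity, loadFactor);
    }
}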
Question: the transfer method looks like it could be quite expensive; would it be better not to resize at all?