Consolidate and Set Out Again: Understanding HashMap, ConcurrentHashMap, and Hashtable in Java

1. Foreword

    Much of what we learn or use is eventually forgotten, but knowledge sticks once we understand and internalize the principles behind it. A useful technique is comparative memorization: place several easily confused concepts side by side and study their differences. Contrasting HashMap and Hashtable is a good example.

2. HashMap Basics

 2.1 Introduction to HashMap

1. HashMap is a hash table that stores key-value mappings.
2. HashMap extends the AbstractMap class and implements the Map, Cloneable, and java.io.Serializable interfaces.
3. HashMap's implementation is not synchronized, which means it is not thread-safe. Both its keys and its values may be null. In addition, the mappings in a HashMap are unordered.
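The null-tolerance in point 3 is easy to demonstrate; this minimal sketch (class name `NullKeyDemo` is just for illustration) shows a null key and a null value coexisting in the same map:

```java
import java.util.HashMap;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put(null, "value for the null key"); // HashMap allows one null key...
        map.put("k", null);                      // ...and any number of null values
        System.out.println(map.get(null));                       // value for the null key
        System.out.println(map.containsKey("k") && map.get("k") == null); // true
    }
}
```

Note that `get("k")` returning null is ambiguous on its own: the key may be absent, or it may be mapped to null. That is why `containsKey` is needed to distinguish the two cases (and why Hashtable, which forbids nulls, does not have this ambiguity).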

    An instance of HashMap has two parameters that affect its performance: the initial capacity and the load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the table is created. The load factor is a measure of how full the table is allowed to get before its capacity is automatically increased. When the number of entries exceeds the product of the load factor and the current capacity, the table is rehashed (that is, its internal data structures are rebuilt) so that it has approximately twice the number of buckets. The default load factor of 0.75 offers a good trade-off between time and space costs: a higher value decreases the space overhead but increases the lookup cost (reflected in most HashMap operations, including get and put). The expected number of entries and the load factor should both be taken into account when choosing the initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash will ever occur.
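The sizing rule in the last sentence can be turned into a small sketch. The helper `capacityFor` below is a hypothetical convenience, not a JDK method; it just applies "capacity > expected entries / load factor":

```java
import java.util.HashMap;
import java.util.Map;

public class SizingDemo {
    // Hypothetical helper: pick an initial capacity large enough that the
    // expected number of entries never crosses the resize threshold.
    static int capacityFor(int expectedEntries, float loadFactor) {
        return (int) (expectedEntries / loadFactor) + 1;
    }

    public static void main(String[] args) {
        int expected = 100;
        int initialCapacity = capacityFor(expected, 0.75f); // 134
        Map<Integer, String> map = new HashMap<>(initialCapacity, 0.75f);
        for (int i = 0; i < expected; i++)
            map.put(i, "v" + i);
        System.out.println(map.size()); // 100, inserted with no rehash needed
    }
}
```

Since 100 / 0.75 ≈ 133.3, any power-of-two capacity of 256 or more (HashMap rounds the requested 134 up internally) keeps the threshold above 100, so the 100 inserts never trigger a resize.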

 2.2 Reading the HashMap Source

package java.util;
import java.io.*;

public class HashMap<K,V>
    extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable
{

    // Default initial capacity: 16. Must be a power of two.
    static final int DEFAULT_INITIAL_CAPACITY = 16;

    // Maximum capacity: must be a power of two no larger than 1<<30;
    // larger requested capacities are clamped to this value.
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Default load factor
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    // The Entry array that stores the data; its length is always a power of two.
    // HashMap uses separate chaining: each Entry is essentially a node of a singly linked list.
    transient Entry[] table;

    // The size of the HashMap: the number of key-value pairs it holds
    transient int size;

    // The resize threshold, used to decide when the capacity must grow
    // (threshold = capacity * load factor)
    int threshold;

    // The load factor actually in use
    final float loadFactor;

    // The number of times this HashMap has been structurally modified
    transient volatile int modCount;

    // Constructor that takes an initial capacity and a load factor
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        // The capacity of a HashMap is capped at MAXIMUM_CAPACITY
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);

        // Find the smallest power of two >= initialCapacity
        int capacity = 1;
        while (capacity < initialCapacity)
            capacity <<= 1;

        // Set the load factor
        this.loadFactor = loadFactor;
        // Set the threshold; once the number of stored entries reaches it,
        // the capacity of the HashMap is doubled.
        threshold = (int)(capacity * loadFactor);
        // Create the Entry array that holds the data
        table = new Entry[capacity];
        init();
    }


    // Constructor that takes only an initial capacity
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    // Default constructor
    public HashMap() {
        // Set the load factor
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        // Set the threshold; once the number of stored entries reaches it,
        // the capacity of the HashMap is doubled.
        threshold = (int)(DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR);
        // Create the Entry array that holds the data
        table = new Entry[DEFAULT_INITIAL_CAPACITY];
        init();
    }

    // Constructor that copies another Map
    public HashMap(Map<? extends K, ? extends V> m) {
        this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
                      DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);
        // Add every element of m to this HashMap
        putAllForCreate(m);
    }

    // Supplemental hash function: spreads the higher bits of the original
    // hash code downward to reduce collisions in the low-order bits.
    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    // Returns the bucket index.
    // h & (length-1) guarantees the result is smaller than length.
    static int indexFor(int h, int length) {
        return h & (length-1);
    }

    public int size() {
        return size;
    }

    public boolean isEmpty() {
        return size == 0;
    }

    // Return the value mapped to key
    public V get(Object key) {
        if (key == null)
            return getForNullKey();
        // Compute the hash of the key
        int hash = hash(key.hashCode());
        // Search the chain for this hash value for an element whose key equals key
        for (Entry<K,V> e = table[indexFor(hash, table.length)];
             e != null;
             e = e.next) {
            Object k;
            if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
                return e.value;
        }
        return null;
    }

    // Return the value of the element whose key is null.
    // HashMap stores the null-key entry at table[0]!
    private V getForNullKey() {
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null)
                return e.value;
        }
        return null;
    }

    // Does this HashMap contain key?
    public boolean containsKey(Object key) {
        return getEntry(key) != null;
    }

    // Return the entry whose key is key
    final Entry<K,V> getEntry(Object key) {
        // Compute the hash value.
        // A null key maps to table[0]; otherwise hash() computes the hash.
        int hash = (key == null) ? 0 : hash(key.hashCode());
        // Search the chain for this hash value for an element whose key equals key
        for (Entry<K,V> e = table[indexFor(hash, table.length)];
             e != null;
             e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k))))
                return e;
        }
        return null;
    }

    // Add the key-value pair to the HashMap
    public V put(K key, V value) {
        // If the key is null, store the pair at table[0].
        if (key == null)
            return putForNullKey(value);
        // Otherwise compute the key's hash and add the pair to the matching chain.
        int hash = hash(key.hashCode());
        int i = indexFor(hash, table.length);
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            // If an entry for this key already exists, replace the old value
            // with the new one and return.
            if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }

        // No entry for this key exists yet, so add the key-value pair to the table
        modCount++;
        addEntry(hash, key, value, i);
        return null;
    }

    // putForNullKey() adds the null-key pair at table[0]
    private V putForNullKey(V value) {
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }
        // Reached only when no null-key entry exists yet:
        // record the modification and add a new entry at table[0].
        modCount++;
        addEntry(0, null, value, 0);
        return null;
    }

    // The "add" method used while constructing a HashMap.
    // Unlike put(), putForCreate() is an internal method called by the
    // constructors and similar code to populate the map,
    // while put() is the public method for adding elements.
    private void putForCreate(K key, V value) {
        int hash = (key == null) ? 0 : hash(key.hashCode());
        int i = indexFor(hash, table.length);

        // If an element with this key already exists, replace its value
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                e.value = value;
                return;
            }
        }

        // Otherwise add the key-value pair to the HashMap
        createEntry(hash, key, value, i);
    }

    // Add all elements of m to the HashMap.
    // Called by the internal map-construction code.
    private void putAllForCreate(Map<? extends K, ? extends V> m) {
        // Use an iterator to add the elements one by one
        for (Iterator<? extends Map.Entry<? extends K, ? extends V>> i = m.entrySet().iterator(); i.hasNext(); ) {
            Map.Entry<? extends K, ? extends V> e = i.next();
            putForCreate(e.getKey(), e.getValue());
        }
    }

    // Resize the HashMap; newCapacity is the new capacity
    void resize(int newCapacity) {
        Entry[] oldTable = table;
        int oldCapacity = oldTable.length;
        if (oldCapacity == MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return;
        }

        // Create a new table, move every element of the old table into it,
        // then make the new table the current one.
        Entry[] newTable = new Entry[newCapacity];
        transfer(newTable);
        table = newTable;
        threshold = (int)(newCapacity * loadFactor);
    }

    // Move every element of the HashMap into newTable
    void transfer(Entry[] newTable) {
        Entry[] src = table;
        int newCapacity = newTable.length;
        for (int j = 0; j < src.length; j++) {
            Entry<K,V> e = src[j];
            if (e != null) {
                src[j] = null;
                do {
                    Entry<K,V> next = e.next;
                    int i = indexFor(e.hash, newCapacity);
                    e.next = newTable[i];
                    newTable[i] = e;
                    e = next;
                } while (e != null);
            }
        }
    }

    // Add all elements of m to the HashMap
    public void putAll(Map<? extends K, ? extends V> m) {
        // Nothing to do for an empty map
        int numKeysToBeAdded = m.size();
        if (numKeysToBeAdded == 0)
            return;

        // Check whether the capacity is sufficient;
        // if not, keep doubling it until it is.
        if (numKeysToBeAdded > threshold) {
            int targetCapacity = (int)(numKeysToBeAdded / loadFactor + 1);
            if (targetCapacity > MAXIMUM_CAPACITY)
                targetCapacity = MAXIMUM_CAPACITY;
            int newCapacity = table.length;
            while (newCapacity < targetCapacity)
                newCapacity <<= 1;
            if (newCapacity > table.length)
                resize(newCapacity);
        }

        // Use an iterator to add the elements of m one by one.
        for (Iterator<? extends Map.Entry<? extends K, ? extends V>> i = m.entrySet().iterator(); i.hasNext(); ) {
            Map.Entry<? extends K, ? extends V> e = i.next();
            put(e.getKey(), e.getValue());
        }
    }

    // Remove the element whose key is key
    public V remove(Object key) {
        Entry<K,V> e = removeEntryForKey(key);
        return (e == null ? null : e.value);
    }

    // Remove and return the entry whose key is key
    final Entry<K,V> removeEntryForKey(Object key) {
        // Compute the hash: 0 for a null key, hash() otherwise
        int hash = (key == null) ? 0 : hash(key.hashCode());
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;

        // Remove the element with this key from the chain;
        // essentially, delete a node from a singly linked list.
        while (e != null) {
            Entry<K,V> next = e.next;
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }

        return e;
    }

    // Remove a key-value pair (a Map.Entry)
    final Entry<K,V> removeMapping(Object o) {
        if (!(o instanceof Map.Entry))
            return null;

        Map.Entry<K,V> entry = (Map.Entry<K,V>) o;
        Object key = entry.getKey();
        int hash = (key == null) ? 0 : hash(key.hashCode());
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;

        // Remove the entry e from the chain;
        // essentially, delete a node from a singly linked list.
        while (e != null) {
            Entry<K,V> next = e.next;
            if (e.hash == hash && e.equals(entry)) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }

        return e;
    }

    // Clear the HashMap by setting every bucket to null
    public void clear() {
        modCount++;
        Entry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            tab[i] = null;
        size = 0;
    }

    // Does the map contain an element whose value is value?
    public boolean containsValue(Object value) {
        // If value is null, delegate to containsNullValue()
        if (value == null)
            return containsNullValue();

        // Otherwise scan the HashMap for a node whose value equals value.
        Entry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            for (Entry e = tab[i]; e != null; e = e.next)
                if (value.equals(e.value))
                    return true;
        return false;
    }

    // Does the map contain a null value?
    private boolean containsNullValue() {
        Entry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            for (Entry e = tab[i]; e != null; e = e.next)
                if (e.value == null)
                    return true;
        return false;
    }

    // Clone this HashMap and return the copy as an Object
    public Object clone() {
        HashMap<K,V> result = null;
        try {
            result = (HashMap<K,V>)super.clone();
        } catch (CloneNotSupportedException e) {
            // assert false;
        }
        result.table = new Entry[table.length];
        result.entrySet = null;
        result.modCount = 0;
        result.size = 0;
        result.init();
        // putAllForCreate() copies every element into the new HashMap
        result.putAllForCreate(this);

        return result;
    }

    // Entry is a node of a singly linked list,
    // the list used by HashMap's separate-chaining scheme.
    // It implements the Map.Entry interface: getKey(), getValue(),
    // setValue(V value), equals(Object o), and hashCode().
    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        // The next node in the chain
        Entry<K,V> next;
        final int hash;

        // Constructor.
        // Parameters: hash value (h), key (k), value (v), next node (n)
        Entry(int h, K k, V v, Entry<K,V> n) {
            value = v;
            next = n;
            key = k;
            hash = h;
        }

        public final K getKey() {
            return key;
        }

        public final V getValue() {
            return value;
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        // Two Entry objects are equal if and only if
        // both their keys and their values are equal.
        public final boolean equals(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry e = (Map.Entry)o;
            Object k1 = getKey();
            Object k2 = e.getKey();
            if (k1 == k2 || (k1 != null && k1.equals(k2))) {
                Object v1 = getValue();
                Object v2 = e.getValue();
                if (v1 == v2 || (v1 != null && v1.equals(v2)))
                    return true;
            }
            return false;
        }

        // hashCode() implementation
        public final int hashCode() {
            return (key==null   ? 0 : key.hashCode()) ^
                   (value==null ? 0 : value.hashCode());
        }

        public final String toString() {
            return getKey() + "=" + getValue();
        }

        // recordAccess() is called when the value of an existing entry
        // is overwritten. It does nothing here.
        void recordAccess(HashMap<K,V> m) {
        }

        // recordRemoval() is called when an entry is removed from the
        // HashMap. It does nothing here.
        void recordRemoval(HashMap<K,V> m) {
        }
    }

    // Add a new Entry: insert key-value at the bucket with index bucketIndex.
    void addEntry(int hash, K key, V value, int bucketIndex) {
        // Save the current head of the bucket in e
        Entry<K,V> e = table[bucketIndex];
        // Make the new Entry the head of the bucket,
        // with e as the new Entry's next node
        table[bucketIndex] = new Entry<K,V>(hash, key, value, e);
        // If the size has reached the threshold, resize the HashMap
        if (size++ >= threshold)
            resize(2 * table.length);
    }

    // Create an Entry: insert key-value at the bucket with index bucketIndex.
    // The difference from addEntry():
    // (01) addEntry() is used when adding an Entry may push the map's size
    //      past the threshold; for example, when a caller keeps adding
    //      elements through put(), which adds entries via addEntry().
    //      Since we cannot know in advance when the threshold will be
    //      crossed, addEntry() must be used.
    // (02) createEntry() is used when adding an Entry cannot push the size
    //      past the threshold; for example, in the Map-copying constructor
    //      the capacity and threshold are computed before the elements are
    //      copied, so none of the insertions can exceed the threshold, and
    //      createEntry() suffices.
    void createEntry(int hash, K key, V value, int bucketIndex) {
        // Save the current head of the bucket in e
        Entry<K,V> e = table[bucketIndex];
        // Make the new Entry the head of the bucket,
        // with e as the new Entry's next node
        table[bucketIndex] = new Entry<K,V>(hash, key, value, e);
        size++;
    }

    // HashIterator is the abstract parent of HashMap's iterators and
    // implements their shared logic. It has three subclasses:
    // KeyIterator, ValueIterator, and EntryIterator.
    private abstract class HashIterator<E> implements Iterator<E> {
        // The next entry to return
        Entry<K,V> next;
        // expectedModCount implements the fail-fast mechanism
        int expectedModCount;
        // The current bucket index
        int index;
        // The current entry
        Entry<K,V> current;

        HashIterator() {
            expectedModCount = modCount;
            if (size > 0) { // advance to first entry
                Entry[] t = table;
                // Point next at the first non-null element of table.
                // index starts at 0 and advances until a non-null
                // element is found.
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        // Return the next entry
        final Entry<K,V> nextEntry() {
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Entry<K,V> e = next;
            if (e == null)
                throw new NoSuchElementException();

            // Note:
            // each Entry heads a singly linked list.
            // If this Entry has a next node, point next at it;
            // otherwise point next at the first non-null node of the
            // next non-empty bucket.
            if ((next = e.next) == null) {
                Entry[] t = table;
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
            current = e;
            return e;
        }

        // Remove the current entry
        public void remove() {
            if (current == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Object k = current.key;
            current = null;
            HashMap.this.removeEntryForKey(k);
            expectedModCount = modCount;
        }

    }

    // Iterator over the values
    private final class ValueIterator extends HashIterator<V> {
        public V next() {
            return nextEntry().value;
        }
    }

    // Iterator over the keys
    private final class KeyIterator extends HashIterator<K> {
        public K next() {
            return nextEntry().getKey();
        }
    }

    // Iterator over the entries
    private final class EntryIterator extends HashIterator<Map.Entry<K,V>> {
        public Map.Entry<K,V> next() {
            return nextEntry();
        }
    }

    // Return a new key iterator
    Iterator<K> newKeyIterator()   {
        return new KeyIterator();
    }
    // Return a new value iterator
    Iterator<V> newValueIterator()   {
        return new ValueIterator();
    }
    // Return a new entry iterator
    Iterator<Map.Entry<K,V>> newEntryIterator()   {
        return new EntryIterator();
    }

    // The set view of the HashMap's entries
    private transient Set<Map.Entry<K,V>> entrySet = null;

    // Return the set of keys; in practice this returns a KeySet object
    public Set<K> keySet() {
        Set<K> ks = keySet;
        return (ks != null ? ks : (keySet = new KeySet()));
    }

    // The set of keys.
    // KeySet extends AbstractSet, so it contains no duplicate keys.
    private final class KeySet extends AbstractSet<K> {
        public Iterator<K> iterator() {
            return newKeyIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsKey(o);
        }
        public boolean remove(Object o) {
            return HashMap.this.removeEntryForKey(o) != null;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    // Return the collection of values; in practice this returns a Values object
    public Collection<V> values() {
        Collection<V> vs = values;
        return (vs != null ? vs : (values = new Values()));
    }

    // The collection of values.
    // Values extends AbstractCollection rather than AbstractSet
    // because values can repeat: different keys may map to equal values.
    private final class Values extends AbstractCollection<V> {
        public Iterator<V> iterator() {
            return newValueIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsValue(o);
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    // Return the set of the HashMap's entries
    public Set<Map.Entry<K,V>> entrySet() {
        return entrySet0();
    }

    // Return the set of entries; in practice this returns an EntrySet object
    private Set<Map.Entry<K,V>> entrySet0() {
        Set<Map.Entry<K,V>> es = entrySet;
        return es != null ? es : (entrySet = new EntrySet());
    }

    // The set of entries.
    // EntrySet extends AbstractSet, so it contains no duplicate entries.
    private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public Iterator<Map.Entry<K,V>> iterator() {
            return newEntryIterator();
        }
        public boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<K,V> e = (Map.Entry<K,V>) o;
            Entry<K,V> candidate = getEntry(e.getKey());
            return candidate != null && candidate.equals(e);
        }
        public boolean remove(Object o) {
            return removeMapping(o) != null;
        }
        public int size() {
            return size;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    // The java.io.Serializable write method.
    // Writes the capacity, the size, and every Entry to the output stream.
    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException
    {
        Iterator<Map.Entry<K,V>> i =
            (size > 0) ? entrySet0().iterator() : null;

        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();

        // Write out number of buckets
        s.writeInt(table.length);

        // Write out size (number of Mappings)
        s.writeInt(size);

        // Write out keys and values (alternating)
        if (i != null) {
            while (i.hasNext()) {
                Map.Entry<K,V> e = i.next();
                s.writeObject(e.getKey());
                s.writeObject(e.getValue());
            }
        }
    }


    private static final long serialVersionUID = 362498820763181265L;

    // The java.io.Serializable read method, the mirror of writeObject():
    // reads back the capacity, the size, and every Entry in order.
    private void readObject(java.io.ObjectInputStream s)
         throws IOException, ClassNotFoundException
    {
        // Read in the threshold, loadfactor, and any hidden stuff
        s.defaultReadObject();

        // Read in number of buckets and allocate the bucket array;
        int numBuckets = s.readInt();
        table = new Entry[numBuckets];

        init();  // Give subclass a chance to do its thing.

        // Read in size (number of Mappings)
        int size = s.readInt();

        // Read the keys and values, and put the mappings in the HashMap
        for (int i=0; i<size; i++) {
            K key = (K) s.readObject();
            V value = (V) s.readObject();
            putForCreate(key, value);
        }
    }

    // Return the total capacity of the HashMap
    int   capacity()     { return table.length; }
    // Return the load factor of the HashMap
    float loadFactor()   { return loadFactor;   }
}
HashMap source walkthrough (JDK 1.6)
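The two static helpers at the heart of the listing, hash() and indexFor(), can be exercised in isolation. This sketch copies them out of the JDK 1.6 source (the demo class name is just for illustration); the key property is that indexFor always yields a valid bucket index for a power-of-two table length:

```java
public class HashIndexDemo {
    // JDK 1.6's supplemental hash: spreads the higher bits of the hash code
    // downward so that keys differing only in upper bits do not all collide.
    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    // Maps a hash to a bucket. This works because length is a power of two,
    // so (length - 1) is an all-ones bit mask.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16; // a power of two, as HashMap guarantees
        for (String key : new String[] {"foo", "bar", "baz"}) {
            int idx = indexFor(hash(key.hashCode()), length);
            System.out.println(key + " -> bucket " + idx); // always in [0, 16)
        }
    }
}
```

Masking with `length - 1` is why HashMap rounds every requested capacity up to a power of two: for any other length the mask would skip some buckets entirely.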

   Since JDK 1.8, however, HashMap also uses red-black trees: when a bucket's singly linked list grows past 8 nodes (and the table is large enough), that list is converted into a red-black tree.
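The effect of long collision chains, and of the JDK 8+ tree bins, can be sketched with a hypothetical key type whose hashCode always collides (the class names here are illustrative, not from the JDK):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key whose hashCode always collides, forcing every entry
    // into the same bucket: a linked list on JDK 7 and earlier, and on
    // JDK 8+ a red-black tree once the bin grows past the treeify
    // threshold (8), provided the table itself is large enough.
    static final class CollidingKey {
        final int id;
        CollidingKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) {
            return o instanceof CollidingKey && ((CollidingKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        Map<CollidingKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 20; i++)
            map.put(new CollidingKey(i), i);
        // Lookups stay correct either way; only their cost differs:
        // an O(n) scan of the list vs. an O(log n) tree search.
        System.out.println(map.get(new CollidingKey(7))); // 7
    }
}
```

This is exactly the "many keys with the same hashCode()" scenario the javadoc warns about; the tree bins in JDK 1.8 turn the worst case from linear to logarithmic lookup time.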

   Let us also look at the relationship between HashMap and Map:

(1) HashMap extends the AbstractMap class and implements the Map interface. Map is the "key-value pair" interface, and AbstractMap provides the common implementations of that interface.
(2) HashMap is a hash table implemented with separate chaining. Its important fields are table, size, threshold, loadFactor, and modCount.
  table is an Entry[] array, and each Entry is in effect a singly linked list; all of the map's key-value pairs are stored in this array.
  size is the size of the HashMap: the number of key-value pairs it holds.
  threshold decides when the HashMap's capacity must grow. Its value is capacity * load factor; when the number of stored entries reaches threshold, the capacity is doubled.
  loadFactor is the load factor.
  modCount implements the fail-fast mechanism. The collection classes under java.util are generally fail-fast, while those under java.util.concurrent are fail-safe: a fail-fast iterator throws a ConcurrentModificationException, whereas a fail-safe iterator never throws that exception.
  When several threads operate on the same collection and one of them structurally modifies it (via add, remove, clear, and so on, which change modCount) while another is iterating, a ConcurrentModificationException is thrown; this is a fail-fast event.
  Fail-fast is an error-detection mechanism only; the JDK does not guarantee it will fire, so it can only be used to detect bugs.
In a multithreaded environment, prefer the classes under java.util.concurrent over those under java.util.
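The fail-fast behavior described above is easy to trigger even in a single thread: modify the map directly while iterating one of its views, and the next iterator step detects the changed modCount. A minimal sketch:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        try {
            for (String key : map.keySet()) {
                // Structural modification behind the iterator's back:
                // modCount changes, so the iterator's next step fails fast.
                map.remove("b");
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered");
        }
    }
}
```

The safe alternatives are Iterator.remove(), which resynchronizes expectedModCount with modCount (as the HashIterator.remove() source above shows), or a java.util.concurrent collection such as ConcurrentHashMap, whose iterators are fail-safe.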

  Now let us look at the HashMap implementation in JDK 1.8:

/*
 * Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

package java.util;

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;

/**
 * Hash table based implementation of the <tt>Map</tt> interface.  This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key.  (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.)  This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including
 * <tt>get</tt> and <tt>put</tt>).  The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations.  If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.  Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table. To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 133  * @see     TreeMap
 134  * @see     Hashtable
 135  * @since   1.2
 136  */
 137 public class HashMap<K,V> extends AbstractMap<K,V>
 138     implements Map<K,V>, Cloneable, Serializable {
 139 
 140     private static final long serialVersionUID = 362498820763181265L;
 141 
 142     /*
 143      * Implementation notes.
 144      *
 145      * This map usually acts as a binned (bucketed) hash table, but
 146      * when bins get too large, they are transformed into bins of
 147      * TreeNodes, each structured similarly to those in
 148      * java.util.TreeMap. Most methods try to use normal bins, but
 149      * relay to TreeNode methods when applicable (simply by checking
 150      * instanceof a node).  Bins of TreeNodes may be traversed and
 151      * used like any others, but additionally support faster lookup
 152      * when overpopulated. However, since the vast majority of bins in
 153      * normal use are not overpopulated, checking for existence of
 154      * tree bins may be delayed in the course of table methods.
 155      *
 156      * Tree bins (i.e., bins whose elements are all TreeNodes) are
 157      * ordered primarily by hashCode, but in the case of ties, if two
  158      * elements are of the same "class C implements Comparable<C>"
  159      * type, then their compareTo method is used for ordering. (We
 160      * conservatively check generic types via reflection to validate
 161      * this -- see method comparableClassFor).  The added complexity
 162      * of tree bins is worthwhile in providing worst-case O(log n)
 163      * operations when keys either have distinct hashes or are
  164      * orderable. Thus, performance degrades gracefully under
 165      * accidental or malicious usages in which hashCode() methods
 166      * return values that are poorly distributed, as well as those in
 167      * which many keys share a hashCode, so long as they are also
 168      * Comparable. (If neither of these apply, we may waste about a
 169      * factor of two in time and space compared to taking no
 170      * precautions. But the only known cases stem from poor user
 171      * programming practices that are already so slow that this makes
 172      * little difference.)
 173      *
 174      * Because TreeNodes are about twice the size of regular nodes, we
 175      * use them only when bins contain enough nodes to warrant use
 176      * (see TREEIFY_THRESHOLD). And when they become too small (due to
 177      * removal or resizing) they are converted back to plain bins.  In
 178      * usages with well-distributed user hashCodes, tree bins are
 179      * rarely used.  Ideally, under random hashCodes, the frequency of
 180      * nodes in bins follows a Poisson distribution
 181      * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
 182      * parameter of about 0.5 on average for the default resizing
 183      * threshold of 0.75, although with a large variance because of
 184      * resizing granularity. Ignoring variance, the expected
 185      * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
 186      * factorial(k)). The first values are:
 187      *
 188      * 0:    0.60653066
 189      * 1:    0.30326533
 190      * 2:    0.07581633
 191      * 3:    0.01263606
 192      * 4:    0.00157952
 193      * 5:    0.00015795
 194      * 6:    0.00001316
 195      * 7:    0.00000094
 196      * 8:    0.00000006
 197      * more: less than 1 in ten million
 198      *
 199      * The root of a tree bin is normally its first node.  However,
 200      * sometimes (currently only upon Iterator.remove), the root might
 201      * be elsewhere, but can be recovered following parent links
 202      * (method TreeNode.root()).
 203      *
 204      * All applicable internal methods accept a hash code as an
 205      * argument (as normally supplied from a public method), allowing
 206      * them to call each other without recomputing user hashCodes.
 207      * Most internal methods also accept a "tab" argument, that is
 208      * normally the current table, but may be a new or old one when
 209      * resizing or converting.
 210      *
 211      * When bin lists are treeified, split, or untreeified, we keep
 212      * them in the same relative access/traversal order (i.e., field
 213      * Node.next) to better preserve locality, and to slightly
 214      * simplify handling of splits and traversals that invoke
 215      * iterator.remove. When using comparators on insertion, to keep a
 216      * total ordering (or as close as is required here) across
 217      * rebalancings, we compare classes and identityHashCodes as
 218      * tie-breakers.
 219      *
 220      * The use and transitions among plain vs tree modes is
 221      * complicated by the existence of subclass LinkedHashMap. See
 222      * below for hook methods defined to be invoked upon insertion,
 223      * removal and access that allow LinkedHashMap internals to
 224      * otherwise remain independent of these mechanics. (This also
 225      * requires that a map instance be passed to some utility methods
 226      * that may create new nodes.)
 227      *
 228      * The concurrent-programming-like SSA-based coding style helps
 229      * avoid aliasing errors amid all of the twisty pointer operations.
 230      */
 231 
 232     /**
 233      * The default initial capacity - MUST be a power of two.
 234      */
 235     static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
 236 
 237     /**
 238      * The maximum capacity, used if a higher value is implicitly specified
 239      * by either of the constructors with arguments.
 240      * MUST be a power of two <= 1<<30.
 241      */
 242     static final int MAXIMUM_CAPACITY = 1 << 30;
 243 
 244     /**
 245      * The load factor used when none specified in constructor.
 246      */
 247     static final float DEFAULT_LOAD_FACTOR = 0.75f;
 248 
 249     /**
 250      * The bin count threshold for using a tree rather than list for a
 251      * bin.  Bins are converted to trees when adding an element to a
 252      * bin with at least this many nodes. The value must be greater
 253      * than 2 and should be at least 8 to mesh with assumptions in
 254      * tree removal about conversion back to plain bins upon
 255      * shrinkage.
 256      */
 257     static final int TREEIFY_THRESHOLD = 8;
 258 
 259     /**
 260      * The bin count threshold for untreeifying a (split) bin during a
 261      * resize operation. Should be less than TREEIFY_THRESHOLD, and at
 262      * most 6 to mesh with shrinkage detection under removal.
 263      */
 264     static final int UNTREEIFY_THRESHOLD = 6;
 265 
 266     /**
 267      * The smallest table capacity for which bins may be treeified.
 268      * (Otherwise the table is resized if too many nodes in a bin.)
 269      * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
 270      * between resizing and treeification thresholds.
 271      */
 272     static final int MIN_TREEIFY_CAPACITY = 64;
 273 
 274     /**
 275      * Basic hash bin node, used for most entries.  (See below for
 276      * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
 277      */
 278     static class Node<K,V> implements Map.Entry<K,V> {
 279         final int hash;
 280         final K key;
 281         V value;
 282         Node<K,V> next;
 283 
 284         Node(int hash, K key, V value, Node<K,V> next) {
 285             this.hash = hash;
 286             this.key = key;
 287             this.value = value;
 288             this.next = next;
 289         }
 290 
 291         public final K getKey()        { return key; }
 292         public final V getValue()      { return value; }
 293         public final String toString() { return key + "=" + value; }
 294 
 295         public final int hashCode() {
 296             return Objects.hashCode(key) ^ Objects.hashCode(value);
 297         }
 298 
 299         public final V setValue(V newValue) {
 300             V oldValue = value;
 301             value = newValue;
 302             return oldValue;
 303         }
 304 
 305         public final boolean equals(Object o) {
 306             if (o == this)
 307                 return true;
 308             if (o instanceof Map.Entry) {
 309                 Map.Entry<?,?> e = (Map.Entry<?,?>)o;
 310                 if (Objects.equals(key, e.getKey()) &&
 311                     Objects.equals(value, e.getValue()))
 312                     return true;
 313             }
 314             return false;
 315         }
 316     }
 317 
 318     /* ---------------- Static utilities -------------- */
 319 
 320     /**
 321      * Computes key.hashCode() and spreads (XORs) higher bits of hash
 322      * to lower.  Because the table uses power-of-two masking, sets of
 323      * hashes that vary only in bits above the current mask will
 324      * always collide. (Among known examples are sets of Float keys
 325      * holding consecutive whole numbers in small tables.)  So we
 326      * apply a transform that spreads the impact of higher bits
 327      * downward. There is a tradeoff between speed, utility, and
 328      * quality of bit-spreading. Because many common sets of hashes
 329      * are already reasonably distributed (so don't benefit from
 330      * spreading), and because we use trees to handle large sets of
 331      * collisions in bins, we just XOR some shifted bits in the
 332      * cheapest possible way to reduce systematic lossage, as well as
 333      * to incorporate impact of the highest bits that would otherwise
 334      * never be used in index calculations because of table bounds.
 335      */
 336     static final int hash(Object key) {
 337         int h;
 338         return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
 339     }
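The `hash()` method above can be exercised with a small standalone sketch. The `spread` helper below is my own re-implementation for illustration (the real method is package-private); it shows why the null key always lands in bucket 0, why `(n - 1) & h` is just `h mod n` for a power-of-two table, and the `Float` collision case the comment mentions.

```java
public class HashSpreadDemo {
    // Re-implementation of HashMap.hash() for illustration: XOR the top
    // 16 bits into the bottom 16 so high bits influence the table index.
    static int spread(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // table length, always a power of two

        // A null key hashes to 0, which is why HashMap allows one null key.
        System.out.println(spread(null)); // 0

        // For power-of-two n, (n - 1) & h equals h mod n, even for negative h.
        int h = spread("hello");
        System.out.println(((n - 1) & h) == Math.floorMod(h, n)); // true

        // Float keys holding consecutive whole numbers differ only in high
        // bits, so their raw hashes all collide in a small table.
        for (float f = 1f; f <= 4f; f++) {
            int raw = Float.valueOf(f).hashCode();
            System.out.println("raw bucket of " + f + " = " + ((n - 1) & raw));
        }
    }
}
```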
 340 
 341     /**
 342      * Returns x's Class if it is of the form "class C implements
 343      * Comparable<C>", else null.
 344      */
 345     static Class<?> comparableClassFor(Object x) {
 346         if (x instanceof Comparable) {
 347             Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
 348             if ((c = x.getClass()) == String.class) // bypass checks
 349                 return c;
 350             if ((ts = c.getGenericInterfaces()) != null) {
 351                 for (int i = 0; i < ts.length; ++i) {
 352                     if (((t = ts[i]) instanceof ParameterizedType) &&
 353                         ((p = (ParameterizedType)t).getRawType() ==
 354                          Comparable.class) &&
 355                         (as = p.getActualTypeArguments()) != null &&
 356                         as.length == 1 && as[0] == c) // type arg is c
 357                         return c;
 358                 }
 359             }
 360         }
 361         return null;
 362     }
 363 
 364     /**
 365      * Returns k.compareTo(x) if x matches kc (k's screened comparable
 366      * class), else 0.
 367      */
 368     @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
 369     static int compareComparables(Class<?> kc, Object k, Object x) {
 370         return (x == null || x.getClass() != kc ? 0 :
 371                 ((Comparable)k).compareTo(x));
 372     }
 373 
 374     /**
 375      * Returns a power of two size for the given target capacity.
 376      */
 377     static final int tableSizeFor(int cap) {
 378         int n = cap - 1;
 379         n |= n >>> 1;
 380         n |= n >>> 2;
 381         n |= n >>> 4;
 382         n |= n >>> 8;
 383         n |= n >>> 16;
 384         return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
 385     }
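`tableSizeFor` rounds any requested capacity up to the next power of two by smearing the highest set bit of `cap - 1` into every lower position. A sketch re-implementation (the name `nextPowerOfTwo` is mine, not HashMap's) makes the behavior easy to check:

```java
public class TableSizeDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Mirrors HashMap.tableSizeFor: after the five shifts, every bit below
    // the highest set bit of (cap - 1) is 1, so n + 1 is a power of two.
    static int nextPowerOfTwo(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOfTwo(0));    // 1
        System.out.println(nextPowerOfTwo(16));   // 16 (already a power of two)
        System.out.println(nextPowerOfTwo(17));   // 32
        System.out.println(nextPowerOfTwo(1000)); // 1024
    }
}
```

The `cap - 1` at the top is what keeps an exact power of two (like 16) from being doubled unnecessarily.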
 386 
 387     /* ---------------- Fields -------------- */
 388 
 389     /**
 390      * The table, initialized on first use, and resized as
 391      * necessary. When allocated, length is always a power of two.
 392      * (We also tolerate length zero in some operations to allow
 393      * bootstrapping mechanics that are currently not needed.)
 394      */
 395     transient Node<K,V>[] table;
 396 
 397     /**
 398      * Holds cached entrySet(). Note that AbstractMap fields are used
 399      * for keySet() and values().
 400      */
 401     transient Set<Map.Entry<K,V>> entrySet;
 402 
 403     /**
 404      * The number of key-value mappings contained in this map.
 405      */
 406     transient int size;
 407 
 408     /**
  409      * The number of times this HashMap has been structurally modified.
 410      * Structural modifications are those that change the number of mappings in
 411      * the HashMap or otherwise modify its internal structure (e.g.,
 412      * rehash).  This field is used to make iterators on Collection-views of
 413      * the HashMap fail-fast.  (See ConcurrentModificationException).
 414      */
 415     transient int modCount;
 416 
 417     /**
 418      * The next size value at which to resize (capacity * load factor).
 419      *
 420      * @serial
 421      */
 422     // (The javadoc description is true upon serialization.
 423     // Additionally, if the table array has not been allocated, this
 424     // field holds the initial array capacity, or zero signifying
 425     // DEFAULT_INITIAL_CAPACITY.)
 426     int threshold;
 427 
 428     /**
 429      * The load factor for the hash table.
 430      *
 431      * @serial
 432      */
 433     final float loadFactor;
 434 
 435     /* ---------------- Public operations -------------- */
 436 
 437     /**
 438      * Constructs an empty <tt>HashMap</tt> with the specified initial
 439      * capacity and load factor.
 440      *
 441      * @param  initialCapacity the initial capacity
 442      * @param  loadFactor      the load factor
 443      * @throws IllegalArgumentException if the initial capacity is negative
 444      *         or the load factor is nonpositive
 445      */
 446     public HashMap(int initialCapacity, float loadFactor) {
 447         if (initialCapacity < 0)
 448             throw new IllegalArgumentException("Illegal initial capacity: " +
 449                                                initialCapacity);
 450         if (initialCapacity > MAXIMUM_CAPACITY)
 451             initialCapacity = MAXIMUM_CAPACITY;
 452         if (loadFactor <= 0 || Float.isNaN(loadFactor))
 453             throw new IllegalArgumentException("Illegal load factor: " +
 454                                                loadFactor);
 455         this.loadFactor = loadFactor;
 456         this.threshold = tableSizeFor(initialCapacity);
 457     }
 458 
 459     /**
 460      * Constructs an empty <tt>HashMap</tt> with the specified initial
 461      * capacity and the default load factor (0.75).
 462      *
 463      * @param  initialCapacity the initial capacity.
 464      * @throws IllegalArgumentException if the initial capacity is negative.
 465      */
 466     public HashMap(int initialCapacity) {
 467         this(initialCapacity, DEFAULT_LOAD_FACTOR);
 468     }
 469 
 470     /**
 471      * Constructs an empty <tt>HashMap</tt> with the default initial capacity
 472      * (16) and the default load factor (0.75).
 473      */
 474     public HashMap() {
 475         this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
 476     }
 477 
 478     /**
 479      * Constructs a new <tt>HashMap</tt> with the same mappings as the
 480      * specified <tt>Map</tt>.  The <tt>HashMap</tt> is created with
 481      * default load factor (0.75) and an initial capacity sufficient to
 482      * hold the mappings in the specified <tt>Map</tt>.
 483      *
 484      * @param   m the map whose mappings are to be placed in this map
 485      * @throws  NullPointerException if the specified map is null
 486      */
 487     public HashMap(Map<? extends K, ? extends V> m) {
 488         this.loadFactor = DEFAULT_LOAD_FACTOR;
 489         putMapEntries(m, false);
 490     }
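The constructors above can be demonstrated with a short usage sketch: pre-sizing to avoid rehashing (per the class javadoc, no rehash occurs if the initial capacity exceeds entries divided by load factor), the copy constructor, and the argument validation in the two-argument constructor.

```java
import java.util.HashMap;
import java.util.Map;

public class ConstructorDemo {
    public static void main(String[] args) {
        // Expecting ~100 entries: (int) (100 / 0.75f) + 1 = 134 guarantees
        // no rehash; tableSizeFor rounds this up to 256 internally.
        Map<String, Integer> sized = new HashMap<>(134);
        sized.put("a", 1);

        // The copy constructor pre-sizes from the source map via putMapEntries.
        Map<String, Integer> copy = new HashMap<>(sized);
        System.out.println(copy.get("a")); // 1

        // A negative capacity (or non-positive load factor) is rejected.
        try {
            new HashMap<String, Integer>(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Note that no table is allocated by any constructor; allocation is deferred to the first `put` (via `resize()`), with the requested capacity parked in `threshold` until then.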
 491 
 492     /**
 493      * Implements Map.putAll and Map constructor
 494      *
 495      * @param m the map
 496      * @param evict false when initially constructing this map, else
 497      * true (relayed to method afterNodeInsertion).
 498      */
 499     final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
 500         int s = m.size();
 501         if (s > 0) {
 502             if (table == null) { // pre-size
 503                 float ft = ((float)s / loadFactor) + 1.0F;
 504                 int t = ((ft < (float)MAXIMUM_CAPACITY) ?
 505                          (int)ft : MAXIMUM_CAPACITY);
 506                 if (t > threshold)
 507                     threshold = tableSizeFor(t);
 508             }
 509             else if (s > threshold)
 510                 resize();
 511             for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
 512                 K key = e.getKey();
 513                 V value = e.getValue();
 514                 putVal(hash(key), key, value, false, evict);
 515             }
 516         }
 517     }
 518 
 519     /**
 520      * Returns the number of key-value mappings in this map.
 521      *
 522      * @return the number of key-value mappings in this map
 523      */
 524     public int size() {
 525         return size;
 526     }
 527 
 528     /**
 529      * Returns <tt>true</tt> if this map contains no key-value mappings.
 530      *
 531      * @return <tt>true</tt> if this map contains no key-value mappings
 532      */
 533     public boolean isEmpty() {
 534         return size == 0;
 535     }
 536 
 537     /**
 538      * Returns the value to which the specified key is mapped,
 539      * or {@code null} if this map contains no mapping for the key.
 540      *
 541      * <p>More formally, if this map contains a mapping from a key
 542      * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
 543      * key.equals(k))}, then this method returns {@code v}; otherwise
 544      * it returns {@code null}.  (There can be at most one such mapping.)
 545      *
 546      * <p>A return value of {@code null} does not <i>necessarily</i>
 547      * indicate that the map contains no mapping for the key; it's also
 548      * possible that the map explicitly maps the key to {@code null}.
 549      * The {@link #containsKey containsKey} operation may be used to
 550      * distinguish these two cases.
 551      *
 552      * @see #put(Object, Object)
 553      */
 554     public V get(Object key) {
 555         Node<K,V> e;
 556         return (e = getNode(hash(key), key)) == null ? null : e.value;
 557     }
 558 
 559     /**
 560      * Implements Map.get and related methods
 561      *
 562      * @param hash hash for key
 563      * @param key the key
 564      * @return the node, or null if none
 565      */
 566     final Node<K,V> getNode(int hash, Object key) {
 567         Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
 568         if ((tab = table) != null && (n = tab.length) > 0 &&
 569             (first = tab[(n - 1) & hash]) != null) {
 570             if (first.hash == hash && // always check first node
 571                 ((k = first.key) == key || (key != null && key.equals(k))))
 572                 return first;
 573             if ((e = first.next) != null) {
 574                 if (first instanceof TreeNode)
 575                     return ((TreeNode<K,V>)first).getTreeNode(hash, key);
 576                 do {
 577                     if (e.hash == hash &&
 578                         ((k = e.key) == key || (key != null && key.equals(k))))
 579                         return e;
 580                 } while ((e = e.next) != null);
 581             }
 582         }
 583         return null;
 584     }
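As the `get` javadoc warns, a `null` return is ambiguous because HashMap permits null values (and one null key). A quick sketch of the distinction, using only public API:

```java
import java.util.HashMap;
import java.util.Map;

public class NullValueDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("present", null);   // HashMap allows null values...
        map.put(null, "nullKey");   // ...and a single null key (bucket 0)

        // get() cannot tell "mapped to null" from "absent":
        System.out.println(map.get("present")); // null
        System.out.println(map.get("absent"));  // null

        // containsKey() walks the same getNode() lookup but tests presence:
        System.out.println(map.containsKey("present")); // true
        System.out.println(map.containsKey("absent"));  // false
        System.out.println(map.get(null));              // nullKey
    }
}
```

This is one practical difference from Hashtable and ConcurrentHashMap, both of which throw `NullPointerException` on null keys or values.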
 585 
 586     /**
 587      * Returns <tt>true</tt> if this map contains a mapping for the
 588      * specified key.
 589      *
 590      * @param   key   The key whose presence in this map is to be tested
 591      * @return <tt>true</tt> if this map contains a mapping for the specified
 592      * key.
 593      */
 594     public boolean containsKey(Object key) {
 595         return getNode(hash(key), key) != null;
 596     }
 597 
 598     /**
 599      * Associates the specified value with the specified key in this map.
 600      * If the map previously contained a mapping for the key, the old
 601      * value is replaced.
 602      *
 603      * @param key key with which the specified value is to be associated
 604      * @param value value to be associated with the specified key
 605      * @return the previous value associated with <tt>key</tt>, or
 606      *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
 607      *         (A <tt>null</tt> return can also indicate that the map
 608      *         previously associated <tt>null</tt> with <tt>key</tt>.)
 609      */
 610     public V put(K key, V value) {
 611         return putVal(hash(key), key, value, false, true);
 612     }
 613 
 614     /**
 615      * Implements Map.put and related methods
 616      *
 617      * @param hash hash for key
 618      * @param key the key
 619      * @param value the value to put
 620      * @param onlyIfAbsent if true, don't change existing value
 621      * @param evict if false, the table is in creation mode.
 622      * @return previous value, or null if none
 623      */
 624     final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
 625                    boolean evict) {
 626         Node<K,V>[] tab; Node<K,V> p; int n, i;
 627         if ((tab = table) == null || (n = tab.length) == 0)
 628             n = (tab = resize()).length;
 629         if ((p = tab[i = (n - 1) & hash]) == null)
 630             tab[i] = newNode(hash, key, value, null);
 631         else {
 632             Node<K,V> e; K k;
 633             if (p.hash == hash &&
 634                 ((k = p.key) == key || (key != null && key.equals(k))))
 635                 e = p;
 636             else if (p instanceof TreeNode)
 637                 e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
 638             else {
 639                 for (int binCount = 0; ; ++binCount) {
 640                     if ((e = p.next) == null) {
 641                         p.next = newNode(hash, key, value, null);
 642                         if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
 643                             treeifyBin(tab, hash);
 644                         break;
 645                     }
 646                     if (e.hash == hash &&
 647                         ((k = e.key) == key || (key != null && key.equals(k))))
 648                         break;
 649                     p = e;
 650                 }
 651             }
 652             if (e != null) { // existing mapping for key
 653                 V oldValue = e.value;
 654                 if (!onlyIfAbsent || oldValue == null)
 655                     e.value = value;
 656                 afterNodeAccess(e);
 657                 return oldValue;
 658             }
 659         }
 660         ++modCount;
 661         if (++size > threshold)
 662             resize();
 663         afterNodeInsertion(evict);
 664         return null;
 665     }
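The return-value contract of `putVal` is easy to observe from the public API: `put` returns the previous value (or null), and `putIfAbsent` rides the same `putVal` path with `onlyIfAbsent = true`.

```java
import java.util.HashMap;
import java.util.Map;

public class PutDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();

        // First insert: no previous mapping, so put() returns null.
        System.out.println(map.put("k", 1)); // null

        // Same key again: the old value is replaced and returned.
        System.out.println(map.put("k", 2)); // 1

        // putIfAbsent() uses putVal with onlyIfAbsent = true:
        // the existing non-null value 2 is kept and returned.
        System.out.println(map.putIfAbsent("k", 3)); // 2
        System.out.println(map.get("k"));            // 2
    }
}
```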
 666 
 667     /**
 668      * Initializes or doubles table size.  If null, allocates in
 669      * accord with initial capacity target held in field threshold.
 670      * Otherwise, because we are using power-of-two expansion, the
 671      * elements from each bin must either stay at same index, or move
 672      * with a power of two offset in the new table.
 673      *
 674      * @return the table
 675      */
 676     final Node<K,V>[] resize() {
 677         Node<K,V>[] oldTab = table;
 678         int oldCap = (oldTab == null) ? 0 : oldTab.length;
 679         int oldThr = threshold;
 680         int newCap, newThr = 0;
 681         if (oldCap > 0) {
 682             if (oldCap >= MAXIMUM_CAPACITY) {
 683                 threshold = Integer.MAX_VALUE;
 684                 return oldTab;
 685             }
 686             else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
 687                      oldCap >= DEFAULT_INITIAL_CAPACITY)
 688                 newThr = oldThr << 1; // double threshold
 689         }
 690         else if (oldThr > 0) // initial capacity was placed in threshold
 691             newCap = oldThr;
 692         else {               // zero initial threshold signifies using defaults
 693             newCap = DEFAULT_INITIAL_CAPACITY;
 694             newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
 695         }
 696         if (newThr == 0) {
 697             float ft = (float)newCap * loadFactor;
 698             newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
 699                       (int)ft : Integer.MAX_VALUE);
 700         }
 701         threshold = newThr;
 702         @SuppressWarnings({"rawtypes","unchecked"})
 703             Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
 704         table = newTab;
 705         if (oldTab != null) {
 706             for (int j = 0; j < oldCap; ++j) {
 707                 Node<K,V> e;
 708                 if ((e = oldTab[j]) != null) {
 709                     oldTab[j] = null;
 710                     if (e.next == null)
 711                         newTab[e.hash & (newCap - 1)] = e;
 712                     else if (e instanceof TreeNode)
 713                         ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
 714                     else { // preserve order
 715                         Node<K,V> loHead = null, loTail = null;
 716                         Node<K,V> hiHead = null, hiTail = null;
 717                         Node<K,V> next;
 718                         do {
 719                             next = e.next;
 720                             if ((e.hash & oldCap) == 0) {
 721                                 if (loTail == null)
 722                                     loHead = e;
 723                                 else
 724                                     loTail.next = e;
 725                                 loTail = e;
 726                             }
 727                             else {
 728                                 if (hiTail == null)
 729                                     hiHead = e;
 730                                 else
 731                                     hiTail.next = e;
 732                                 hiTail = e;
 733                             }
 734                         } while ((e = next) != null);
 735                         if (loTail != null) {
 736                             loTail.next = null;
 737                             newTab[j] = loHead;
 738                         }
 739                         if (hiTail != null) {
 740                             hiTail.next = null;
 741                             newTab[j + oldCap] = hiHead;
 742                         }
 743                     }
 744                 }
 745             }
 746         }
 747         return newTab;
 748     }
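The key trick in `resize()` is the `(e.hash & oldCap) == 0` test: when the table doubles, the index mask gains exactly one bit, and that bit's value is `oldCap`. So each node either stays at its old index (the "lo" list) or moves to `oldIndex + oldCap` (the "hi" list), with no full rehash. A bit-math sketch:

```java
public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;

        // Hashes 5, 21, 37, 53 all share old bucket 5 (they differ by 16).
        for (int hash : new int[]{5, 21, 37, 53}) {
            int oldIndex = hash & (oldCap - 1); // index before resize
            int newIndex = hash & (newCap - 1); // index after resize
            // The single new mask bit equals oldCap, so the node either
            // stays at oldIndex or moves to oldIndex + oldCap.
            boolean stays = (hash & oldCap) == 0;
            System.out.println("hash=" + hash + " old=" + oldIndex
                    + " new=" + newIndex + " stays=" + stays);
        }
    }
}
```

Running this shows hashes 5 and 37 staying at index 5 while 21 and 53 move to index 21 = 5 + 16, which is exactly how the lo/hi lists land at `newTab[j]` and `newTab[j + oldCap]` above.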
 749 
 750     /**
 751      * Replaces all linked nodes in bin at index for given hash unless
 752      * table is too small, in which case resizes instead.
 753      */
 754     final void treeifyBin(Node<K,V>[] tab, int hash) {
 755         int n, index; Node<K,V> e;
 756         if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
 757             resize();
 758         else if ((e = tab[index = (n - 1) & hash]) != null) {
 759             TreeNode<K,V> hd = null, tl = null;
 760             do {
 761                 TreeNode<K,V> p = replacementTreeNode(e, null);
 762                 if (tl == null)
 763                     hd = p;
 764                 else {
 765                     p.prev = tl;
 766                     tl.next = p;
 767                 }
 768                 tl = p;
 769             } while ((e = e.next) != null);
 770             if ((tab[index] = hd) != null)
 771                 hd.treeify(tab);
 772         }
 773     }
 774 
 775     /**
 776      * Copies all of the mappings from the specified map to this map.
 777      * These mappings will replace any mappings that this map had for
 778      * any of the keys currently in the specified map.
 779      *
 780      * @param m mappings to be stored in this map
 781      * @throws NullPointerException if the specified map is null
 782      */
 783     public void putAll(Map<? extends K, ? extends V> m) {
 784         putMapEntries(m, true);
 785     }
 786 
 787     /**
 788      * Removes the mapping for the specified key from this map if present.
 789      *
 790      * @param  key key whose mapping is to be removed from the map
 791      * @return the previous value associated with <tt>key</tt>, or
 792      *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
 793      *         (A <tt>null</tt> return can also indicate that the map
 794      *         previously associated <tt>null</tt> with <tt>key</tt>.)
 795      */
 796     public V remove(Object key) {
 797         Node<K,V> e;
 798         return (e = removeNode(hash(key), key, null, false, true)) == null ?
 799             null : e.value;
 800     }
 801 
 802     /**
 803      * Implements Map.remove and related methods
 804      *
 805      * @param hash hash for key
 806      * @param key the key
 807      * @param value the value to match if matchValue, else ignored
 808      * @param matchValue if true only remove if value is equal
 809      * @param movable if false do not move other nodes while removing
 810      * @return the node, or null if none
 811      */
 812     final Node<K,V> removeNode(int hash, Object key, Object value,
 813                                boolean matchValue, boolean movable) {
 814         Node<K,V>[] tab; Node<K,V> p; int n, index;
 815         if ((tab = table) != null && (n = tab.length) > 0 &&
 816             (p = tab[index = (n - 1) & hash]) != null) {
 817             Node<K,V> node = null, e; K k; V v;
 818             if (p.hash == hash &&
 819                 ((k = p.key) == key || (key != null && key.equals(k))))
 820                 node = p;
 821             else if ((e = p.next) != null) {
 822                 if (p instanceof TreeNode)
 823                     node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
 824                 else {
 825                     do {
 826                         if (e.hash == hash &&
 827                             ((k = e.key) == key ||
 828                              (key != null && key.equals(k)))) {
 829                             node = e;
 830                             break;
 831                         }
 832                         p = e;
 833                     } while ((e = e.next) != null);
 834                 }
 835             }
 836             if (node != null && (!matchValue || (v = node.value) == value ||
 837                                  (value != null && value.equals(v)))) {
 838                 if (node instanceof TreeNode)
 839                     ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
 840                 else if (node == p)
 841                     tab[index] = node.next;
 842                 else
 843                     p.next = node.next;
 844                 ++modCount;
 845                 --size;
 846                 afterNodeRemoval(node);
 847                 return node;
 848             }
 849         }
 850         return null;
 851     }
 852 
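Because HashMap permits null values, the null return documented above for remove() is ambiguous: it can mean "key was absent" or "key was mapped to null". A minimal sketch of the distinction (the demo class name is hypothetical):

```java
import java.util.HashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", null);                    // HashMap allows null values

        // Removing an existing mapping returns its previous value.
        System.out.println(map.remove("a"));   // 1

        // Both of these return null: a mapping to null, and a missing key.
        System.out.println(map.remove("b"));   // null (was mapped to null)
        System.out.println(map.remove("c"));   // null (never present)

        // Use containsKey first when the distinction matters.
        System.out.println(map.containsKey("b"));  // false after removal
    }
}
```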
 853     /**
 854      * Removes all of the mappings from this map.
 855      * The map will be empty after this call returns.
 856      */
 857     public void clear() {
 858         Node<K,V>[] tab;
 859         modCount++;
 860         if ((tab = table) != null && size > 0) {
 861             size = 0;
 862             for (int i = 0; i < tab.length; ++i)
 863                 tab[i] = null;
 864         }
 865     }
 866 
 867     /**
 868      * Returns <tt>true</tt> if this map maps one or more keys to the
 869      * specified value.
 870      *
 871      * @param value value whose presence in this map is to be tested
 872      * @return <tt>true</tt> if this map maps one or more keys to the
 873      *         specified value
 874      */
 875     public boolean containsValue(Object value) {
 876         Node<K,V>[] tab; V v;
 877         if ((tab = table) != null && size > 0) {
 878             for (int i = 0; i < tab.length; ++i) {
 879                 for (Node<K,V> e = tab[i]; e != null; e = e.next) {
 880                     if ((v = e.value) == value ||
 881                         (value != null && value.equals(v)))
 882                         return true;
 883                 }
 884             }
 885         }
 886         return false;
 887     }
 888 
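Unlike containsKey, the containsValue implementation above scans every bucket and every node, so it costs O(capacity + size). A small sketch, including the null-value case that the `==` comparison handles:

```java
import java.util.HashMap;

public class ContainsValueDemo {
    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>();
        map.put("k1", "v1");
        map.put("k2", null);

        // Full table scan, not a hash lookup.
        System.out.println(map.containsValue("v1"));  // true
        System.out.println(map.containsValue(null));  // true: null values match via ==
        System.out.println(map.containsValue("v2"));  // false
    }
}
```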
 889     /**
 890      * Returns a {@link Set} view of the keys contained in this map.
 891      * The set is backed by the map, so changes to the map are
 892      * reflected in the set, and vice-versa.  If the map is modified
 893      * while an iteration over the set is in progress (except through
 894      * the iterator's own <tt>remove</tt> operation), the results of
 895      * the iteration are undefined.  The set supports element removal,
 896      * which removes the corresponding mapping from the map, via the
 897      * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
 898      * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
 899      * operations.  It does not support the <tt>add</tt> or <tt>addAll</tt>
 900      * operations.
 901      *
 902      * @return a set view of the keys contained in this map
 903      */
 904     public Set<K> keySet() {
 905         Set<K> ks;
 906         return (ks = keySet) == null ? (keySet = new KeySet()) : ks;
 907     }
 908 
 909     final class KeySet extends AbstractSet<K> {
 910         public final int size()                 { return size; }
 911         public final void clear()               { HashMap.this.clear(); }
 912         public final Iterator<K> iterator()     { return new KeyIterator(); }
 913         public final boolean contains(Object o) { return containsKey(o); }
 914         public final boolean remove(Object key) {
 915             return removeNode(hash(key), key, null, false, true) != null;
 916         }
 917         public final Spliterator<K> spliterator() {
 918             return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
 919         }
 920         public final void forEach(Consumer<? super K> action) {
 921             Node<K,V>[] tab;
 922             if (action == null)
 923                 throw new NullPointerException();
 924             if (size > 0 && (tab = table) != null) {
 925                 int mc = modCount;
 926                 for (int i = 0; i < tab.length; ++i) {
 927                     for (Node<K,V> e = tab[i]; e != null; e = e.next)
 928                         action.accept(e.key);
 929                 }
 930                 if (modCount != mc)
 931                     throw new ConcurrentModificationException();
 932             }
 933         }
 934     }
 935 
 936     /**
 937      * Returns a {@link Collection} view of the values contained in this map.
 938      * The collection is backed by the map, so changes to the map are
 939      * reflected in the collection, and vice-versa.  If the map is
 940      * modified while an iteration over the collection is in progress
 941      * (except through the iterator's own <tt>remove</tt> operation),
 942      * the results of the iteration are undefined.  The collection
 943      * supports element removal, which removes the corresponding
 944      * mapping from the map, via the <tt>Iterator.remove</tt>,
 945      * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
 946      * <tt>retainAll</tt> and <tt>clear</tt> operations.  It does not
 947      * support the <tt>add</tt> or <tt>addAll</tt> operations.
 948      *
 949      * @return a view of the values contained in this map
 950      */
 951     public Collection<V> values() {
 952         Collection<V> vs;
 953         return (vs = values) == null ? (values = new Values()) : vs;
 954     }
 955 
 956     final class Values extends AbstractCollection<V> {
 957         public final int size()                 { return size; }
 958         public final void clear()               { HashMap.this.clear(); }
 959         public final Iterator<V> iterator()     { return new ValueIterator(); }
 960         public final boolean contains(Object o) { return containsValue(o); }
 961         public final Spliterator<V> spliterator() {
 962             return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
 963         }
 964         public final void forEach(Consumer<? super V> action) {
 965             Node<K,V>[] tab;
 966             if (action == null)
 967                 throw new NullPointerException();
 968             if (size > 0 && (tab = table) != null) {
 969                 int mc = modCount;
 970                 for (int i = 0; i < tab.length; ++i) {
 971                     for (Node<K,V> e = tab[i]; e != null; e = e.next)
 972                         action.accept(e.value);
 973                 }
 974                 if (modCount != mc)
 975                     throw new ConcurrentModificationException();
 976             }
 977         }
 978     }
 979 
 980     /**
 981      * Returns a {@link Set} view of the mappings contained in this map.
 982      * The set is backed by the map, so changes to the map are
 983      * reflected in the set, and vice-versa.  If the map is modified
 984      * while an iteration over the set is in progress (except through
 985      * the iterator's own <tt>remove</tt> operation, or through the
 986      * <tt>setValue</tt> operation on a map entry returned by the
 987      * iterator) the results of the iteration are undefined.  The set
 988      * supports element removal, which removes the corresponding
 989      * mapping from the map, via the <tt>Iterator.remove</tt>,
 990      * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and
 991      * <tt>clear</tt> operations.  It does not support the
 992      * <tt>add</tt> or <tt>addAll</tt> operations.
 993      *
 994      * @return a set view of the mappings contained in this map
 995      */
 996     public Set<Map.Entry<K,V>> entrySet() {
 997         Set<Map.Entry<K,V>> es;
 998         return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
 999     }
1000 
1001     final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
1002         public final int size()                 { return size; }
1003         public final void clear()               { HashMap.this.clear(); }
1004         public final Iterator<Map.Entry<K,V>> iterator() {
1005             return new EntryIterator();
1006         }
1007         public final boolean contains(Object o) {
1008             if (!(o instanceof Map.Entry))
1009                 return false;
1010             Map.Entry<?,?> e = (Map.Entry<?,?>) o;
1011             Object key = e.getKey();
1012             Node<K,V> candidate = getNode(hash(key), key);
1013             return candidate != null && candidate.equals(e);
1014         }
1015         public final boolean remove(Object o) {
1016             if (o instanceof Map.Entry) {
1017                 Map.Entry<?,?> e = (Map.Entry<?,?>) o;
1018                 Object key = e.getKey();
1019                 Object value = e.getValue();
1020                 return removeNode(hash(key), key, value, true, true) != null;
1021             }
1022             return false;
1023         }
1024         public final Spliterator<Map.Entry<K,V>> spliterator() {
1025             return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
1026         }
1027         public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
1028             Node<K,V>[] tab;
1029             if (action == null)
1030                 throw new NullPointerException();
1031             if (size > 0 && (tab = table) != null) {
1032                 int mc = modCount;
1033                 for (int i = 0; i < tab.length; ++i) {
1034                     for (Node<K,V> e = tab[i]; e != null; e = e.next)
1035                         action.accept(e);
1036                 }
1037                 if (modCount != mc)
1038                     throw new ConcurrentModificationException();
1039             }
1040         }
1041     }
1042 
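The three views above (keySet, values, entrySet) are backed by the map rather than being copies: removal through a view removes from the map, setValue on an entry writes through, and add/addAll are unsupported. A brief sketch (class name hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ViewDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // Removing through the key view removes the mapping itself.
        Set<String> keys = map.keySet();
        keys.remove("a");
        System.out.println(map.containsKey("a"));  // false

        // setValue on an entry writes through to the map.
        for (Map.Entry<String, Integer> e : map.entrySet())
            e.setValue(e.getValue() * 10);
        System.out.println(map.get("b"));          // 20

        // Views do not support add/addAll.
        try {
            keys.add("c");
        } catch (UnsupportedOperationException ex) {
            System.out.println("add not supported");
        }
    }
}
```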
1043     // Overrides of JDK8 Map extension methods
1044 
1045     @Override
1046     public V getOrDefault(Object key, V defaultValue) {
1047         Node<K,V> e;
1048         return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
1049     }
1050 
1051     @Override
1052     public V putIfAbsent(K key, V value) {
1053         return putVal(hash(key), key, value, true, true);
1054     }
1055 
1056     @Override
1057     public boolean remove(Object key, Object value) {
1058         return removeNode(hash(key), key, value, true, true) != null;
1059     }
1060 
1061     @Override
1062     public boolean replace(K key, V oldValue, V newValue) {
1063         Node<K,V> e; V v;
1064         if ((e = getNode(hash(key), key)) != null &&
1065             ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
1066             e.value = newValue;
1067             afterNodeAccess(e);
1068             return true;
1069         }
1070         return false;
1071     }
1072 
1073     @Override
1074     public V replace(K key, V value) {
1075         Node<K,V> e;
1076         if ((e = getNode(hash(key), key)) != null) {
1077             V oldValue = e.value;
1078             e.value = value;
1079             afterNodeAccess(e);
1080             return oldValue;
1081         }
1082         return null;
1083     }
1084 
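The JDK 8 default-method overrides above are thin wrappers over getNode/putVal/removeNode (for example, putIfAbsent is just putVal with onlyIfAbsent = true). A quick usage sketch:

```java
import java.util.HashMap;

public class DefaultMethodsDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);

        System.out.println(map.getOrDefault("a", 0));  // 1 (present)
        System.out.println(map.getOrDefault("b", 0));  // 0 (absent -> default)

        map.putIfAbsent("a", 99);   // no-op: "a" already has a non-null value
        map.putIfAbsent("b", 2);    // inserts
        System.out.println(map.get("a") + " " + map.get("b"));  // 1 2

        // replace(K, V, V) only writes when the current value matches.
        System.out.println(map.replace("a", 5, 100));  // false (current is 1, not 5)
        System.out.println(map.replace("a", 1, 100));  // true
        System.out.println(map.get("a"));              // 100
    }
}
```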
1085     @Override
1086     public V computeIfAbsent(K key,
1087                              Function<? super K, ? extends V> mappingFunction) {
1088         if (mappingFunction == null)
1089             throw new NullPointerException();
1090         int hash = hash(key);
1091         Node<K,V>[] tab; Node<K,V> first; int n, i;
1092         int binCount = 0;
1093         TreeNode<K,V> t = null;
1094         Node<K,V> old = null;
1095         if (size > threshold || (tab = table) == null ||
1096             (n = tab.length) == 0)
1097             n = (tab = resize()).length;
1098         if ((first = tab[i = (n - 1) & hash]) != null) {
1099             if (first instanceof TreeNode)
1100                 old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
1101             else {
1102                 Node<K,V> e = first; K k;
1103                 do {
1104                     if (e.hash == hash &&
1105                         ((k = e.key) == key || (key != null && key.equals(k)))) {
1106                         old = e;
1107                         break;
1108                     }
1109                     ++binCount;
1110                 } while ((e = e.next) != null);
1111             }
1112             V oldValue;
1113             if (old != null && (oldValue = old.value) != null) {
1114                 afterNodeAccess(old);
1115                 return oldValue;
1116             }
1117         }
1118         V v = mappingFunction.apply(key);
1119         if (v == null) {
1120             return null;
1121         } else if (old != null) {
1122             old.value = v;
1123             afterNodeAccess(old);
1124             return v;
1125         }
1126         else if (t != null)
1127             t.putTreeVal(this, tab, hash, key, v);
1128         else {
1129             tab[i] = newNode(hash, key, v, first);
1130             if (binCount >= TREEIFY_THRESHOLD - 1)
1131                 treeifyBin(tab, hash);
1132         }
1133         ++modCount;
1134         ++size;
1135         afterNodeInsertion(true);
1136         return v;
1137     }
1138 
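Two behaviors of computeIfAbsent follow directly from the code above: an existing null value is treated as absent (the `oldValue != null` check), and a null result from the mapping function inserts nothing (the early `return null`). The classic multimap-style sketch:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class ComputeIfAbsentDemo {
    public static void main(String[] args) {
        HashMap<String, List<Integer>> index = new HashMap<>();

        // The mapping function runs only on the first miss.
        index.computeIfAbsent("evens", k -> new ArrayList<>()).add(2);
        index.computeIfAbsent("evens", k -> new ArrayList<>()).add(4);
        System.out.println(index.get("evens"));  // [2, 4]

        // A function that returns null leaves the map unchanged.
        index.computeIfAbsent("odds", k -> null);
        System.out.println(index.containsKey("odds"));  // false
    }
}
```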
 1139     @Override public V computeIfPresent(K key,
1140                               BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
1141         if (remappingFunction == null)
1142             throw new NullPointerException();
1143         Node<K,V> e; V oldValue;
1144         int hash = hash(key);
1145         if ((e = getNode(hash, key)) != null &&
1146             (oldValue = e.value) != null) {
1147             V v = remappingFunction.apply(key, oldValue);
1148             if (v != null) {
1149                 e.value = v;
1150                 afterNodeAccess(e);
1151                 return v;
1152             }
1153             else
1154                 removeNode(hash, key, null, false, true);
1155         }
1156         return null;
1157     }
1158 
1159     @Override
1160     public V compute(K key,
1161                      BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
1162         if (remappingFunction == null)
1163             throw new NullPointerException();
1164         int hash = hash(key);
1165         Node<K,V>[] tab; Node<K,V> first; int n, i;
1166         int binCount = 0;
1167         TreeNode<K,V> t = null;
1168         Node<K,V> old = null;
1169         if (size > threshold || (tab = table) == null ||
1170             (n = tab.length) == 0)
1171             n = (tab = resize()).length;
1172         if ((first = tab[i = (n - 1) & hash]) != null) {
1173             if (first instanceof TreeNode)
1174                 old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
1175             else {
1176                 Node<K,V> e = first; K k;
1177                 do {
1178                     if (e.hash == hash &&
1179                         ((k = e.key) == key || (key != null && key.equals(k)))) {
1180                         old = e;
1181                         break;
1182                     }
1183                     ++binCount;
1184                 } while ((e = e.next) != null);
1185             }
1186         }
1187         V oldValue = (old == null) ? null : old.value;
1188         V v = remappingFunction.apply(key, oldValue);
1189         if (old != null) {
1190             if (v != null) {
1191                 old.value = v;
1192                 afterNodeAccess(old);
1193             }
1194             else
1195                 removeNode(hash, key, null, false, true);
1196         }
1197         else if (v != null) {
1198             if (t != null)
1199                 t.putTreeVal(this, tab, hash, key, v);
1200             else {
1201                 tab[i] = newNode(hash, key, v, first);
1202                 if (binCount >= TREEIFY_THRESHOLD - 1)
1203                     treeifyBin(tab, hash);
1204             }
1205             ++modCount;
1206             ++size;
1207             afterNodeInsertion(true);
1208         }
1209         return v;
1210     }
1211 
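computeIfPresent and compute differ only in when the remapping function fires (existing non-null value vs. always); in both, a null result removes the mapping via removeNode. An illustrative sketch:

```java
import java.util.HashMap;

public class ComputeDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> stock = new HashMap<>();
        stock.put("apples", 3);

        // computeIfPresent fires only for an existing non-null value...
        stock.computeIfPresent("apples", (k, v) -> v - 1);
        stock.computeIfPresent("pears", (k, v) -> v - 1);   // no-op: absent
        System.out.println(stock.get("apples"));  // 2

        // ...and a null result removes the mapping.
        stock.computeIfPresent("apples", (k, v) -> null);
        System.out.println(stock.containsKey("apples"));  // false

        // compute fires either way; oldValue is null when the key is absent.
        stock.compute("pears", (k, v) -> (v == null) ? 1 : v + 1);
        System.out.println(stock.get("pears"));  // 1
    }
}
```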
1212     @Override
1213     public V merge(K key, V value,
1214                    BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
1215         if (value == null)
1216             throw new NullPointerException();
1217         if (remappingFunction == null)
1218             throw new NullPointerException();
1219         int hash = hash(key);
1220         Node<K,V>[] tab; Node<K,V> first; int n, i;
1221         int binCount = 0;
1222         TreeNode<K,V> t = null;
1223         Node<K,V> old = null;
1224         if (size > threshold || (tab = table) == null ||
1225             (n = tab.length) == 0)
1226             n = (tab = resize()).length;
1227         if ((first = tab[i = (n - 1) & hash]) != null) {
1228             if (first instanceof TreeNode)
1229                 old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
1230             else {
1231                 Node<K,V> e = first; K k;
1232                 do {
1233                     if (e.hash == hash &&
1234                         ((k = e.key) == key || (key != null && key.equals(k)))) {
1235                         old = e;
1236                         break;
1237                     }
1238                     ++binCount;
1239                 } while ((e = e.next) != null);
1240             }
1241         }
1242         if (old != null) {
1243             V v;
1244             if (old.value != null)
1245                 v = remappingFunction.apply(old.value, value);
1246             else
1247                 v = value;
1248             if (v != null) {
1249                 old.value = v;
1250                 afterNodeAccess(old);
1251             }
1252             else
1253                 removeNode(hash, key, null, false, true);
1254             return v;
1255         }
1256         if (value != null) {
1257             if (t != null)
1258                 t.putTreeVal(this, tab, hash, key, value);
1259             else {
1260                 tab[i] = newNode(hash, key, value, first);
1261                 if (binCount >= TREEIFY_THRESHOLD - 1)
1262                     treeifyBin(tab, hash);
1263             }
1264             ++modCount;
1265             ++size;
1266             afterNodeInsertion(true);
1267         }
1268         return value;
1269     }
1270 
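merge above stores the given value on a miss, applies the remapping function to combine on a hit, and removes the entry when the function returns null. The classic word-count sketch:

```java
import java.util.HashMap;

public class MergeDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> counts = new HashMap<>();

        // value is used directly on a miss; Integer::sum combines on a hit.
        for (String w : new String[]{"to", "be", "or", "not", "to", "be"})
            counts.merge(w, 1, Integer::sum);
        System.out.println(counts.get("to"));   // 2
        System.out.println(counts.get("be"));   // 2
        System.out.println(counts.get("or"));   // 1

        // A null result from the function removes the mapping.
        counts.merge("or", 1, (a, b) -> null);
        System.out.println(counts.containsKey("or"));  // false
    }
}
```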
1271     @Override
1272     public void forEach(BiConsumer<? super K, ? super V> action) {
1273         Node<K,V>[] tab;
1274         if (action == null)
1275             throw new NullPointerException();
1276         if (size > 0 && (tab = table) != null) {
1277             int mc = modCount;
1278             for (int i = 0; i < tab.length; ++i) {
1279                 for (Node<K,V> e = tab[i]; e != null; e = e.next)
1280                     action.accept(e.key, e.value);
1281             }
1282             if (modCount != mc)
1283                 throw new ConcurrentModificationException();
1284         }
1285     }
1286 
1287     @Override
1288     public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
1289         Node<K,V>[] tab;
1290         if (function == null)
1291             throw new NullPointerException();
1292         if (size > 0 && (tab = table) != null) {
1293             int mc = modCount;
1294             for (int i = 0; i < tab.length; ++i) {
1295                 for (Node<K,V> e = tab[i]; e != null; e = e.next) {
1296                     e.value = function.apply(e.key, e.value);
1297                 }
1298             }
1299             if (modCount != mc)
1300                 throw new ConcurrentModificationException();
1301         }
1302     }
1303 
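Both forEach and replaceAll above record modCount before the loop and re-check it afterwards, so any structural modification from inside the callback is reported fail-fast. replaceAll itself only rewrites values, which is not a structural change. A short sketch:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;

public class TraverseDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // Value rewrites do not change modCount, so this is safe.
        map.replaceAll((k, v) -> v * 10);
        System.out.println(map.get("a") + " " + map.get("b"));  // 10 20

        // Removing entries from inside the action changes modCount,
        // and the post-loop check throws.
        try {
            map.forEach((k, v) -> map.remove(k));
        } catch (ConcurrentModificationException ex) {
            System.out.println("fail-fast");
        }
    }
}
```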
1304     /* ------------------------------------------------------------ */
1305     // Cloning and serialization
1306 
1307     /**
1308      * Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and
1309      * values themselves are not cloned.
1310      *
1311      * @return a shallow copy of this map
1312      */
1313     @SuppressWarnings("unchecked")
1314     @Override
1315     public Object clone() {
1316         HashMap<K,V> result;
1317         try {
1318             result = (HashMap<K,V>)super.clone();
1319         } catch (CloneNotSupportedException e) {
1320             // this shouldn't happen, since we are Cloneable
1321             throw new InternalError(e);
1322         }
1323         result.reinitialize();
1324         result.putMapEntries(this, false);
1325         return result;
1326     }
1327 
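clone above is a shallow copy: reinitialize plus putMapEntries gives the clone its own table, but the key and value objects themselves are shared. Illustration (class name hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;

public class CloneDemo {
    public static void main(String[] args) {
        HashMap<String, List<Integer>> map = new HashMap<>();
        map.put("nums", new ArrayList<>(Arrays.asList(1, 2)));

        @SuppressWarnings("unchecked")
        HashMap<String, List<Integer>> copy =
            (HashMap<String, List<Integer>>) map.clone();

        // The copy has its own table: structural changes are independent...
        copy.put("other", new ArrayList<>());
        System.out.println(map.containsKey("other"));  // false

        // ...but the value objects are shared (shallow copy).
        copy.get("nums").add(3);
        System.out.println(map.get("nums"));  // [1, 2, 3]
    }
}
```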
1328     // These methods are also used when serializing HashSets
1329     final float loadFactor() { return loadFactor; }
1330     final int capacity() {
1331         return (table != null) ? table.length :
1332             (threshold > 0) ? threshold :
1333             DEFAULT_INITIAL_CAPACITY;
1334     }
1335 
1336     /**
1337      * Save the state of the <tt>HashMap</tt> instance to a stream (i.e.,
1338      * serialize it).
1339      *
1340      * @serialData The <i>capacity</i> of the HashMap (the length of the
1341      *             bucket array) is emitted (int), followed by the
1342      *             <i>size</i> (an int, the number of key-value
1343      *             mappings), followed by the key (Object) and value (Object)
1344      *             for each key-value mapping.  The key-value mappings are
1345      *             emitted in no particular order.
1346      */
1347     private void writeObject(java.io.ObjectOutputStream s)
1348         throws IOException {
1349         int buckets = capacity();
1350         // Write out the threshold, loadfactor, and any hidden stuff
1351         s.defaultWriteObject();
1352         s.writeInt(buckets);
1353         s.writeInt(size);
1354         internalWriteEntries(s);
1355     }
1356 
1357     /**
1358      * Reconstitute the {@code HashMap} instance from a stream (i.e.,
1359      * deserialize it).
1360      */
1361     private void readObject(java.io.ObjectInputStream s)
1362         throws IOException, ClassNotFoundException {
1363         // Read in the threshold (ignored), loadfactor, and any hidden stuff
1364         s.defaultReadObject();
1365         reinitialize();
1366         if (loadFactor <= 0 || Float.isNaN(loadFactor))
1367             throw new InvalidObjectException("Illegal load factor: " +
1368                                              loadFactor);
1369         s.readInt();                // Read and ignore number of buckets
1370         int mappings = s.readInt(); // Read number of mappings (size)
1371         if (mappings < 0)
1372             throw new InvalidObjectException("Illegal mappings count: " +
1373                                              mappings);
1374         else if (mappings > 0) { // (if zero, use defaults)
1375             // Size the table using given load factor only if within
1376             // range of 0.25...4.0
1377             float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
1378             float fc = (float)mappings / lf + 1.0f;
1379             int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
1380                        DEFAULT_INITIAL_CAPACITY :
1381                        (fc >= MAXIMUM_CAPACITY) ?
1382                        MAXIMUM_CAPACITY :
1383                        tableSizeFor((int)fc));
1384             float ft = (float)cap * lf;
1385             threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
1386                          (int)ft : Integer.MAX_VALUE);
1387             @SuppressWarnings({"rawtypes","unchecked"})
1388                 Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
1389             table = tab;
1390 
1391             // Read the keys and values, and put the mappings in the HashMap
1392             for (int i = 0; i < mappings; i++) {
1393                 @SuppressWarnings("unchecked")
1394                     K key = (K) s.readObject();
1395                 @SuppressWarnings("unchecked")
1396                     V value = (V) s.readObject();
1397                 putVal(hash(key), key, value, false, false);
1398             }
1399         }
1400     }
1401 
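writeObject/readObject above round-trip the map by re-inserting every entry through putVal: the serialized bucket count is read and discarded, and the new table is sized from the mapping count and loadFactor, so entries are re-hashed on deserialization. A round-trip sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class SerializeDemo {
    public static void main(String[] args) throws Exception {
        HashMap<String, Integer> map = new HashMap<>(64);  // oversized capacity
        map.put("a", 1);

        // Serialize: capacity and size are written, then each key/value pair.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(map);
        }

        // Deserialize: the table is rebuilt and every entry re-inserted.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            @SuppressWarnings("unchecked")
            HashMap<String, Integer> restored =
                (HashMap<String, Integer>) in.readObject();
            System.out.println(restored.get("a"));     // 1
            System.out.println(restored.equals(map));  // true
        }
    }
}
```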
1402     /* ------------------------------------------------------------ */
1403     // iterators
1404 
1405     abstract class HashIterator {
1406         Node<K,V> next;        // next entry to return
1407         Node<K,V> current;     // current entry
1408         int expectedModCount;  // for fast-fail
1409         int index;             // current slot
1410 
1411         HashIterator() {
1412             expectedModCount = modCount;
1413             Node<K,V>[] t = table;
1414             current = next = null;
1415             index = 0;
1416             if (t != null && size > 0) { // advance to first entry
1417                 do {} while (index < t.length && (next = t[index++]) == null);
1418             }
1419         }
1420 
1421         public final boolean hasNext() {
1422             return next != null;
1423         }
1424 
1425         final Node<K,V> nextNode() {
1426             Node<K,V>[] t;
1427             Node<K,V> e = next;
1428             if (modCount != expectedModCount)
1429                 throw new ConcurrentModificationException();
1430             if (e == null)
1431                 throw new NoSuchElementException();
1432             if ((next = (current = e).next) == null && (t = table) != null) {
1433                 do {} while (index < t.length && (next = t[index++]) == null);
1434             }
1435             return e;
1436         }
1437 
1438         public final void remove() {
1439             Node<K,V> p = current;
1440             if (p == null)
1441                 throw new IllegalStateException();
1442             if (modCount != expectedModCount)
1443                 throw new ConcurrentModificationException();
1444             current = null;
1445             K key = p.key;
1446             removeNode(hash(key), key, null, false, false);
1447             expectedModCount = modCount;
1448         }
1449     }
1450 
1451     final class KeyIterator extends HashIterator
1452         implements Iterator<K> {
1453         public final K next() { return nextNode().key; }
1454     }
1455 
1456     final class ValueIterator extends HashIterator
1457         implements Iterator<V> {
1458         public final V next() { return nextNode().value; }
1459     }
1460 
1461     final class EntryIterator extends HashIterator
1462         implements Iterator<Map.Entry<K,V>> {
1463         public final Map.Entry<K,V> next() { return nextNode(); }
1464     }
1465 
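HashIterator.remove above calls removeNode and then resets expectedModCount = modCount, which is why Iterator.remove is the only safe way to delete during iteration; removing through the map itself trips the fail-fast check in nextNode. A sketch:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class IteratorDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);

        // Safe: Iterator.remove keeps expectedModCount in sync.
        for (Iterator<Map.Entry<String, Integer>> it = map.entrySet().iterator();
             it.hasNext(); ) {
            if (it.next().getValue() % 2 == 0)
                it.remove();
        }
        System.out.println(map.size());  // 2

        // Unsafe: modifying the map directly mid-iteration is fail-fast.
        try {
            for (String k : map.keySet())
                map.remove(k);
        } catch (ConcurrentModificationException ex) {
            System.out.println("fail-fast");
        }
    }
}
```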
1466     /* ------------------------------------------------------------ */
1467     // spliterators
1468 
1469     static class HashMapSpliterator<K,V> {
1470         final HashMap<K,V> map;
1471         Node<K,V> current;          // current node
1472         int index;                  // current index, modified on advance/split
1473         int fence;                  // one past last index
1474         int est;                    // size estimate
1475         int expectedModCount;       // for comodification checks
1476 
1477         HashMapSpliterator(HashMap<K,V> m, int origin,
1478                            int fence, int est,
1479                            int expectedModCount) {
1480             this.map = m;
1481             this.index = origin;
1482             this.fence = fence;
1483             this.est = est;
1484             this.expectedModCount = expectedModCount;
1485         }
1486 
1487         final int getFence() { // initialize fence and size on first use
1488             int hi;
1489             if ((hi = fence) < 0) {
1490                 HashMap<K,V> m = map;
1491                 est = m.size;
1492                 expectedModCount = m.modCount;
1493                 Node<K,V>[] tab = m.table;
1494                 hi = fence = (tab == null) ? 0 : tab.length;
1495             }
1496             return hi;
1497         }
1498 
1499         public final long estimateSize() {
1500             getFence(); // force init
1501             return (long) est;
1502         }
1503     }
1504 
1505     static final class KeySpliterator<K,V>
1506         extends HashMapSpliterator<K,V>
1507         implements Spliterator<K> {
1508         KeySpliterator(HashMap<K,V> m, int origin, int fence, int est,
1509                        int expectedModCount) {
1510             super(m, origin, fence, est, expectedModCount);
1511         }
1512 
1513         public KeySpliterator<K,V> trySplit() {
1514             int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
1515             return (lo >= mid || current != null) ? null :
1516                 new KeySpliterator<>(map, lo, index = mid, est >>>= 1,
1517                                         expectedModCount);
1518         }
1519 
1520         public void forEachRemaining(Consumer<? super K> action) {
1521             int i, hi, mc;
1522             if (action == null)
1523                 throw new NullPointerException();
1524             HashMap<K,V> m = map;
1525             Node<K,V>[] tab = m.table;
1526             if ((hi = fence) < 0) {
1527                 mc = expectedModCount = m.modCount;
1528                 hi = fence = (tab == null) ? 0 : tab.length;
1529             }
1530             else
1531                 mc = expectedModCount;
1532             if (tab != null && tab.length >= hi &&
1533                 (i = index) >= 0 && (i < (index = hi) || current != null)) {
1534                 Node<K,V> p = current;
1535                 current = null;
1536                 do {
1537                     if (p == null)
1538                         p = tab[i++];
1539                     else {
1540                         action.accept(p.key);
1541                         p = p.next;
1542                     }
1543                 } while (p != null || i < hi);
1544                 if (m.modCount != mc)
1545                     throw new ConcurrentModificationException();
1546             }
1547         }
1548 
1549         public boolean tryAdvance(Consumer<? super K> action) {
1550             int hi;
1551             if (action == null)
1552                 throw new NullPointerException();
1553             Node<K,V>[] tab = map.table;
1554             if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
1555                 while (current != null || index < hi) {
1556                     if (current == null)
1557                         current = tab[index++];
1558                     else {
1559                         K k = current.key;
1560                         current = current.next;
1561                         action.accept(k);
1562                         if (map.modCount != expectedModCount)
1563                             throw new ConcurrentModificationException();
1564                         return true;
1565                     }
1566                 }
1567             }
1568             return false;
1569         }
1570 
1571         public int characteristics() {
1572             return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
1573                 Spliterator.DISTINCT;
1574         }
1575     }
1576 
1577     static final class ValueSpliterator<K,V>
1578         extends HashMapSpliterator<K,V>
1579         implements Spliterator<V> {
1580         ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est,
1581                          int expectedModCount) {
1582             super(m, origin, fence, est, expectedModCount);
1583         }
1584 
1585         public ValueSpliterator<K,V> trySplit() {
1586             int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
1587             return (lo >= mid || current != null) ? null :
1588                 new ValueSpliterator<>(map, lo, index = mid, est >>>= 1,
1589                                           expectedModCount);
1590         }
1591 
1592         public void forEachRemaining(Consumer<? super V> action) {
1593             int i, hi, mc;
1594             if (action == null)
1595                 throw new NullPointerException();
1596             HashMap<K,V> m = map;
1597             Node<K,V>[] tab = m.table;
1598             if ((hi = fence) < 0) {
1599                 mc = expectedModCount = m.modCount;
1600                 hi = fence = (tab == null) ? 0 : tab.length;
1601             }
1602             else
1603                 mc = expectedModCount;
1604             if (tab != null && tab.length >= hi &&
1605                 (i = index) >= 0 && (i < (index = hi) || current != null)) {
1606                 Node<K,V> p = current;
1607                 current = null;
1608                 do {
1609                     if (p == null)
1610                         p = tab[i++];
1611                     else {
1612                         action.accept(p.value);
1613                         p = p.next;
1614                     }
1615                 } while (p != null || i < hi);
1616                 if (m.modCount != mc)
1617                     throw new ConcurrentModificationException();
1618             }
1619         }
1620 
1621         public boolean tryAdvance(Consumer<? super V> action) {
1622             int hi;
1623             if (action == null)
1624                 throw new NullPointerException();
1625             Node<K,V>[] tab = map.table;
1626             if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
1627                 while (current != null || index < hi) {
1628                     if (current == null)
1629                         current = tab[index++];
1630                     else {
1631                         V v = current.value;
1632                         current = current.next;
1633                         action.accept(v);
1634                         if (map.modCount != expectedModCount)
1635                             throw new ConcurrentModificationException();
1636                         return true;
1637                     }
1638                 }
1639             }
1640             return false;
1641         }
1642 
1643         public int characteristics() {
1644             return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);
1645         }
1646     }
1647 
1648     static final class EntrySpliterator<K,V>
1649         extends HashMapSpliterator<K,V>
1650         implements Spliterator<Map.Entry<K,V>> {
1651         EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est,
1652                          int expectedModCount) {
1653             super(m, origin, fence, est, expectedModCount);
1654         }
1655 
1656         public EntrySpliterator<K,V> trySplit() {
1657             int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
1658             return (lo >= mid || current != null) ? null :
1659                 new EntrySpliterator<>(map, lo, index = mid, est >>>= 1,
1660                                           expectedModCount);
1661         }
1662 
1663         public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) {
1664             int i, hi, mc;
1665             if (action == null)
1666                 throw new NullPointerException();
1667             HashMap<K,V> m = map;
1668             Node<K,V>[] tab = m.table;
1669             if ((hi = fence) < 0) {
1670                 mc = expectedModCount = m.modCount;
1671                 hi = fence = (tab == null) ? 0 : tab.length;
1672             }
1673             else
1674                 mc = expectedModCount;
1675             if (tab != null && tab.length >= hi &&
1676                 (i = index) >= 0 && (i < (index = hi) || current != null)) {
1677                 Node<K,V> p = current;
1678                 current = null;
1679                 do {
1680                     if (p == null)
1681                         p = tab[i++];
1682                     else {
1683                         action.accept(p);
1684                         p = p.next;
1685                     }
1686                 } while (p != null || i < hi);
1687                 if (m.modCount != mc)
1688                     throw new ConcurrentModificationException();
1689             }
1690         }
1691 
1692         public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) {
1693             int hi;
1694             if (action == null)
1695                 throw new NullPointerException();
1696             Node<K,V>[] tab = map.table;
1697             if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
1698                 while (current != null || index < hi) {
1699                     if (current == null)
1700                         current = tab[index++];
1701                     else {
1702                         Node<K,V> e = current;
1703                         current = current.next;
1704                         action.accept(e);
1705                         if (map.modCount != expectedModCount)
1706                             throw new ConcurrentModificationException();
1707                         return true;
1708                     }
1709                 }
1710             }
1711             return false;
1712         }
1713 
1714         public int characteristics() {
1715             return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
1716                 Spliterator.DISTINCT;
1717         }
1718     }
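上面三个 Spliterator(Key/Value/Entry)的行为可以在外部观察到:trySplit() 把剩余的桶区间对半切分,两半合起来恰好覆盖所有元素;tryAdvance() 在第一次调用时才绑定 expectedModCount(延迟绑定),绑定之后的结构性修改会触发 fail-fast 检查。下面是一个最小的示意例子(类名 SpliteratorDemo 等标识符为演示而取,非 JDK 自带):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.Spliterator;

public class SpliteratorDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        for (int i = 0; i < 8; i++) map.put("k" + i, i);

        // trySplit() 把剩余的桶下标区间对半切分;
        // 两个 spliterator 合起来恰好遍历到每个 key 一次。
        Spliterator<String> s1 = map.keySet().spliterator();
        Spliterator<String> s2 = s1.trySplit();
        Set<String> seen = new HashSet<>();
        s1.forEachRemaining(seen::add);
        if (s2 != null) s2.forEachRemaining(seen::add);

        // tryAdvance() 第一次调用时才记录 expectedModCount(延迟绑定);
        // 绑定之后再发生结构性修改,下一次 tryAdvance 会抛出 CME。
        Spliterator<String> sp = map.keySet().spliterator();
        sp.tryAdvance(k -> {});
        map.put("extra", -1);          // 结构性修改,modCount 递增
        boolean gotCME = false;
        try {
            sp.tryAdvance(k -> {});
        } catch (ConcurrentModificationException e) {
            gotCME = true;
        }
        System.out.println(seen.size() + " " + gotCME); // 8 true
    }
}
```

这正对应源码中 tryAdvance 里 accept 之后的 `map.modCount != expectedModCount` 检查。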
1719 
1720     /* ------------------------------------------------------------ */
1721     // LinkedHashMap support
1722 
1723 
1724     /*
1725      * The following package-protected methods are designed to be
1726      * overridden by LinkedHashMap, but not by any other subclass.
1727      * Nearly all other internal methods are also package-protected
1728      * but are declared final, so can be used by LinkedHashMap, view
1729      * classes, and HashSet.
1730      */
1731 
1732     // Create a regular (non-tree) node
1733     Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
1734         return new Node<>(hash, key, value, next);
1735     }
1736 
1737     // For conversion from TreeNodes to plain nodes
1738     Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {
1739         return new Node<>(p.hash, p.key, p.value, next);
1740     }
1741 
1742     // Create a tree bin node
1743     TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) {
1744         return new TreeNode<>(hash, key, value, next);
1745     }
1746 
1747     // For treeifyBin
1748     TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
1749         return new TreeNode<>(p.hash, p.key, p.value, next);
1750     }
1751 
1752     /**
1753      * Reset to initial default state.  Called by clone and readObject.
1754      */
1755     void reinitialize() {
1756         table = null;
1757         entrySet = null;
1758         keySet = null;
1759         values = null;
1760         modCount = 0;
1761         threshold = 0;
1762         size = 0;
1763     }
1764 
1765     // Callbacks to allow LinkedHashMap post-actions
1766     void afterNodeAccess(Node<K,V> p) { }
1767     void afterNodeInsertion(boolean evict) { }
1768     void afterNodeRemoval(Node<K,V> p) { }
1769 
1770     // Called only from writeObject, to ensure compatible ordering.
1771     void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
1772         Node<K,V>[] tab;
1773         if (size > 0 && (tab = table) != null) {
1774             for (int i = 0; i < tab.length; ++i) {
1775                 for (Node<K,V> e = tab[i]; e != null; e = e.next) {
1776                     s.writeObject(e.key);
1777                     s.writeObject(e.value);
1778                 }
1779             }
1780         }
1781     }
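上面 afterNodeAccess / afterNodeInsertion / afterNodeRemoval 这几个空回调正是留给 LinkedHashMap 覆盖的钩子。它们的效果可以直接观察到:构造 LinkedHashMap 时把 accessOrder 设为 true,afterNodeAccess 就会把被访问的节点挪到双向链表尾部(这也是实现 LRU 缓存的基础)。一个最小的演示(类名为演示而取):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true:LinkedHashMap 覆盖 afterNodeAccess,
        // 每次 get/put 命中都会把该节点移动到链表尾部。
        LinkedHashMap<String, Integer> m = new LinkedHashMap<>(16, 0.75f, true);
        m.put("a", 1);
        m.put("b", 2);
        m.put("c", 3);
        m.get("a");                        // "a" 被重新链接到尾部
        List<String> order = new ArrayList<>(m.keySet());
        System.out.println(order);         // [b, c, a]
    }
}
```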
1782 
1783     /* ------------------------------------------------------------ */
1784     // Tree bins
1785 
1786     /**
1787      * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
1788      * extends Node) so can be used as extension of either regular or
1789      * linked node.
1790      */
1791     static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
1792         TreeNode<K,V> parent;  // red-black tree links
1793         TreeNode<K,V> left;
1794         TreeNode<K,V> right;
1795         TreeNode<K,V> prev;    // needed to unlink next upon deletion
1796         boolean red;
1797         TreeNode(int hash, K key, V val, Node<K,V> next) {
1798             super(hash, key, val, next);
1799         }
1800 
1801         /**
1802          * Returns root of tree containing this node.
1803          */
1804         final TreeNode<K,V> root() {
1805             for (TreeNode<K,V> r = this, p;;) {
1806                 if ((p = r.parent) == null)
1807                     return r;
1808                 r = p;
1809             }
1810         }
1811 
1812         /**
1813          * Ensures that the given root is the first node of its bin.
1814          */
1815         static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
1816             int n;
1817             if (root != null && tab != null && (n = tab.length) > 0) {
1818                 int index = (n - 1) & root.hash;
1819                 TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
1820                 if (root != first) {
1821                     Node<K,V> rn;
1822                     tab[index] = root;
1823                     TreeNode<K,V> rp = root.prev;
1824                     if ((rn = root.next) != null)
1825                         ((TreeNode<K,V>)rn).prev = rp;
1826                     if (rp != null)
1827                         rp.next = rn;
1828                     if (first != null)
1829                         first.prev = root;
1830                     root.next = first;
1831                     root.prev = null;
1832                 }
1833                 assert checkInvariants(root);
1834             }
1835         }
1836 
1837         /**
1838          * Finds the node starting at root p with the given hash and key.
1839          * The kc argument caches comparableClassFor(key) upon first use
1840          * comparing keys.
1841          */
1842         final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
1843             TreeNode<K,V> p = this;
1844             do {
1845                 int ph, dir; K pk;
1846                 TreeNode<K,V> pl = p.left, pr = p.right, q;
1847                 if ((ph = p.hash) > h)
1848                     p = pl;
1849                 else if (ph < h)
1850                     p = pr;
1851                 else if ((pk = p.key) == k || (k != null && k.equals(pk)))
1852                     return p;
1853                 else if (pl == null)
1854                     p = pr;
1855                 else if (pr == null)
1856                     p = pl;
1857                 else if ((kc != null ||
1858                           (kc = comparableClassFor(k)) != null) &&
1859                          (dir = compareComparables(kc, k, pk)) != 0)
1860                     p = (dir < 0) ? pl : pr;
1861                 else if ((q = pr.find(h, k, kc)) != null)
1862                     return q;
1863                 else
1864                     p = pl;
1865             } while (p != null);
1866             return null;
1867         }
1868 
1869         /**
1870          * Calls find for root node.
1871          */
1872         final TreeNode<K,V> getTreeNode(int h, Object k) {
1873             return ((parent != null) ? root() : this).find(h, k, null);
1874         }
1875 
1876         /**
1877          * Tie-breaking utility for ordering insertions when equal
1878          * hashCodes and non-comparable. We don't require a total
1879          * order, just a consistent insertion rule to maintain
1880          * equivalence across rebalancings. Tie-breaking further than
1881          * necessary simplifies testing a bit.
1882          */
1883         static int tieBreakOrder(Object a, Object b) {
1884             int d;
1885             if (a == null || b == null ||
1886                 (d = a.getClass().getName().
1887                  compareTo(b.getClass().getName())) == 0)
1888                 d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
1889                      -1 : 1);
1890             return d;
1891         }
1892 
1893         /**
1894          * Forms tree of the nodes linked from this node.
1895          * (Note: despite the original "@return root of tree" comment, the method
1896          * returns void; the resulting root is installed via moveRootToFront.)
1896          */
1897         final void treeify(Node<K,V>[] tab) {
1898             TreeNode<K,V> root = null;
1899             for (TreeNode<K,V> x = this, next; x != null; x = next) {
1900                 next = (TreeNode<K,V>)x.next;
1901                 x.left = x.right = null;
1902                 if (root == null) {
1903                     x.parent = null;
1904                     x.red = false;
1905                     root = x;
1906                 }
1907                 else {
1908                     K k = x.key;
1909                     int h = x.hash;
1910                     Class<?> kc = null;
1911                     for (TreeNode<K,V> p = root;;) {
1912                         int dir, ph;
1913                         K pk = p.key;
1914                         if ((ph = p.hash) > h)
1915                             dir = -1;
1916                         else if (ph < h)
1917                             dir = 1;
1918                         else if ((kc == null &&
1919                                   (kc = comparableClassFor(k)) == null) ||
1920                                  (dir = compareComparables(kc, k, pk)) == 0)
1921                             dir = tieBreakOrder(k, pk);
1922 
1923                         TreeNode<K,V> xp = p;
1924                         if ((p = (dir <= 0) ? p.left : p.right) == null) {
1925                             x.parent = xp;
1926                             if (dir <= 0)
1927                                 xp.left = x;
1928                             else
1929                                 xp.right = x;
1930                             root = balanceInsertion(root, x);
1931                             break;
1932                         }
1933                     }
1934                 }
1935             }
1936             moveRootToFront(tab, root);
1937         }
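treeify 对使用者是透明的,但它的意义可以用一个极端的冲突场景来体会:让所有 key 的 hashCode 相同,它们会全部落进同一个桶。当链长超过 TREEIFY_THRESHOLD(8)且表容量达到 MIN_TREEIFY_CAPACITY(64)后,这条链会被转成红黑树,查找从 O(n) 降到 O(log n);key 实现了 Comparable 时,排序直接用 compareTo,否则才走上面的 tieBreakOrder。下面的 ColKey 是为演示而写的假想 key 类型:

```java
import java.util.HashMap;
import java.util.Map;

public class TreeBinDemo {
    // 演示用的 key:所有实例 hashCode 相同,强制落入同一个桶
    static final class ColKey implements Comparable<ColKey> {
        final int id;
        ColKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) {
            return o instanceof ColKey && ((ColKey) o).id == id;
        }
        @Override public int compareTo(ColKey other) {
            return Integer.compare(id, other.id);
        }
    }

    public static void main(String[] args) {
        Map<ColKey, Integer> m = new HashMap<>();
        for (int i = 0; i < 100; i++) m.put(new ColKey(i), i);
        // 此时这 100 个节点都在同一个桶里,内部已被 treeify 成红黑树;
        // 对外仍是普通的 Map 语义,所有 key 都能正确取回。
        int sum = 0;
        for (int i = 0; i < 100; i++) sum += m.get(new ColKey(i));
        System.out.println(m.size() + " " + sum); // 100 4950
    }
}
```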
1938 
1939         /**
1940          * Returns a list of non-TreeNodes replacing those linked from
1941          * this node.
1942          */
1943         final Node<K,V> untreeify(HashMap<K,V> map) {
1944             Node<K,V> hd = null, tl = null;
1945             for (Node<K,V> q = this; q != null; q = q.next) {
1946                 Node<K,V> p = map.replacementNode(q, null);
1947                 if (tl == null)
1948                     hd = p;
1949                 else
1950                     tl.next = p;
1951                 tl = p;
1952             }
1953             return hd;
1954         }
1955 
1956         /**
1957          * Tree version of putVal.
1958          */
1959         final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
1960                                        int h, K k, V v) {
1961             Class<?> kc = null;
1962             boolean searched = false;
1963             TreeNode<K,V> root = (parent != null) ? root() : this;
1964             for (TreeNode<K,V> p = root;;) {
1965                 int dir, ph; K pk;
1966                 if ((ph = p.hash) > h)
1967                     dir = -1;
1968                 else if (ph < h)
1969                     dir = 1;
1970                 else if ((pk = p.key) == k || (k != null && k.equals(pk)))
1971                     return p;
1972                 else if ((kc == null &&
1973                           (kc = comparableClassFor(k)) == null) ||
1974                          (dir = compareComparables(kc, k, pk)) == 0) {
1975                     if (!searched) {
1976                         TreeNode<K,V> q, ch;
1977                         searched = true;
1978                         if (((ch = p.left) != null &&
1979                              (q = ch.find(h, k, kc)) != null) ||
1980                             ((ch = p.right) != null &&
1981                              (q = ch.find(h, k, kc)) != null))
1982                             return q;
1983                     }
1984                     dir = tieBreakOrder(k, pk);
1985                 }
1986 
1987                 TreeNode<K,V> xp = p;
1988                 if ((p = (dir <= 0) ? p.left : p.right) == null) {
1989                     Node<K,V> xpn = xp.next;
1990                     TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
1991                     if (dir <= 0)
1992                         xp.left = x;
1993                     else
1994                         xp.right = x;
1995                     xp.next = x;
1996                     x.parent = x.prev = xp;
1997                     if (xpn != null)
1998                         ((TreeNode<K,V>)xpn).prev = x;
1999                     moveRootToFront(tab, balanceInsertion(root, x));
2000                     return null;
2001                 }
2002             }
2003         }
2004 
2005         /**
2006          * Removes the given node, that must be present before this call.
2007          * This is messier than typical red-black deletion code because we
2008          * cannot swap the contents of an interior node with a leaf
2009          * successor that is pinned by "next" pointers that are accessible
2010          * independently during traversal. So instead we swap the tree
2011          * linkages. If the current tree appears to have too few nodes,
2012          * the bin is converted back to a plain bin. (The test triggers
2013          * somewhere between 2 and 6 nodes, depending on tree structure).
2014          */
2015         final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab,
2016                                   boolean movable) {
2017             int n;
2018             if (tab == null || (n = tab.length) == 0)
2019                 return;
2020             int index = (n - 1) & hash;
2021             TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl;
2022             TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev;
2023             if (pred == null)
2024                 tab[index] = first = succ;
2025             else
2026                 pred.next = succ;
2027             if (succ != null)
2028                 succ.prev = pred;
2029             if (first == null)
2030                 return;
2031             if (root.parent != null)
2032                 root = root.root();
2033             if (root == null || root.right == null ||
2034                 (rl = root.left) == null || rl.left == null) {
2035                 tab[index] = first.untreeify(map);  // too small
2036                 return;
2037             }
2038             TreeNode<K,V> p = this, pl = left, pr = right, replacement;
2039             if (pl != null && pr != null) {
2040                 TreeNode<K,V> s = pr, sl;
2041                 while ((sl = s.left) != null) // find successor
2042                     s = sl;
2043                 boolean c = s.red; s.red = p.red; p.red = c; // swap colors
2044                 TreeNode<K,V> sr = s.right;
2045                 TreeNode<K,V> pp = p.parent;
2046                 if (s == pr) { // p was s's direct parent
2047                     p.parent = s;
2048                     s.right = p;
2049                 }
2050                 else {
2051                     TreeNode<K,V> sp = s.parent;
2052                     if ((p.parent = sp) != null) {
2053                         if (s == sp.left)
2054                             sp.left = p;
2055                         else
2056                             sp.right = p;
2057                     }
2058                     if ((s.right = pr) != null)
2059                         pr.parent = s;
2060                 }
2061                 p.left = null;
2062                 if ((p.right = sr) != null)
2063                     sr.parent = p;
2064                 if ((s.left = pl) != null)
2065                     pl.parent = s;
2066                 if ((s.parent = pp) == null)
2067                     root = s;
2068                 else if (p == pp.left)
2069                     pp.left = s;
2070                 else
2071                     pp.right = s;
2072                 if (sr != null)
2073                     replacement = sr;
2074                 else
2075                     replacement = p;
2076             }
2077             else if (pl != null)
2078                 replacement = pl;
2079             else if (pr != null)
2080                 replacement = pr;
2081             else
2082                 replacement = p;
2083             if (replacement != p) {
2084                 TreeNode<K,V> pp = replacement.parent = p.parent;
2085                 if (pp == null)
2086                     root = replacement;
2087                 else if (p == pp.left)
2088                     pp.left = replacement;
2089                 else
2090                     pp.right = replacement;
2091                 p.left = p.right = p.parent = null;
2092             }
2093 
2094             TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement);
2095 
2096             if (replacement == p) {  // detach
2097                 TreeNode<K,V> pp = p.parent;
2098                 p.parent = null;
2099                 if (pp != null) {
2100                     if (p == pp.left)
2101                         pp.left = null;
2102                     else if (p == pp.right)
2103                         pp.right = null;
2104                 }
2105             }
2106             if (movable)
2107                 moveRootToFront(tab, r);
2108         }
2109 
2110         /**
2111          * Splits nodes in a tree bin into lower and upper tree bins,
2112          * or untreeifies if now too small. Called only from resize;
2113          * see above discussion about split bits and indices.
2114          *
2115          * @param map the map
2116          * @param tab the table for recording bin heads
2117          * @param index the index of the table being split
2118          * @param bit the bit of hash to split on
2119          */
2120         final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
2121             TreeNode<K,V> b = this;
2122             // Relink into lo and hi lists, preserving order
2123             TreeNode<K,V> loHead = null, loTail = null;
2124             TreeNode<K,V> hiHead = null, hiTail = null;
2125             int lc = 0, hc = 0;
2126             for (TreeNode<K,V> e = b, next; e != null; e = next) {
2127                 next = (TreeNode<K,V>)e.next;
2128                 e.next = null;
2129                 if ((e.hash & bit) == 0) {
2130                     if ((e.prev = loTail) == null)
2131                         loHead = e;
2132                     else
2133                         loTail.next = e;
2134                     loTail = e;
2135                     ++lc;
2136                 }
2137                 else {
2138                     if ((e.prev = hiTail) == null)
2139                         hiHead = e;
2140                     else
2141                         hiTail.next = e;
2142                     hiTail = e;
2143                     ++hc;
2144                 }
2145             }
2146 
2147             if (loHead != null) {
2148                 if (lc <= UNTREEIFY_THRESHOLD)
2149                     tab[index] = loHead.untreeify(map);
2150                 else {
2151                     tab[index] = loHead;
2152                     if (hiHead != null) // (else is already treeified)
2153                         loHead.treeify(tab);
2154                 }
2155             }
2156             if (hiHead != null) {
2157                 if (hc <= UNTREEIFY_THRESHOLD)
2158                     tab[index + bit] = hiHead.untreeify(map);
2159                 else {
2160                     tab[index + bit] = hiHead;
2161                     if (loHead != null)
2162                         hiHead.treeify(tab);
2163                 }
2164             }
2165         }
2166 
2167         /* ------------------------------------------------------------ */
2168         // Red-black tree methods, all adapted from CLR
2169 
2170         static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
2171                                               TreeNode<K,V> p) {
2172             TreeNode<K,V> r, pp, rl;
2173             if (p != null && (r = p.right) != null) {
2174                 if ((rl = p.right = r.left) != null)
2175                     rl.parent = p;
2176                 if ((pp = r.parent = p.parent) == null)
2177                     (root = r).red = false;
2178                 else if (pp.left == p)
2179                     pp.left = r;
2180                 else
2181                     pp.right = r;
2182                 r.left = p;
2183                 p.parent = r;
2184             }
2185             return root;
2186         }
2187 
2188         static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
2189                                                TreeNode<K,V> p) {
2190             TreeNode<K,V> l, pp, lr;
2191             if (p != null && (l = p.left) != null) {
2192                 if ((lr = p.left = l.right) != null)
2193                     lr.parent = p;
2194                 if ((pp = l.parent = p.parent) == null)
2195                     (root = l).red = false;
2196                 else if (pp.right == p)
2197                     pp.right = l;
2198                 else
2199                     pp.left = l;
2200                 l.right = p;
2201                 p.parent = l;
2202             }
2203             return root;
2204         }
2205 
2206         static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
2207                                                     TreeNode<K,V> x) {
2208             x.red = true;
2209             for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
2210                 if ((xp = x.parent) == null) {
2211                     x.red = false;
2212                     return x;
2213                 }
2214                 else if (!xp.red || (xpp = xp.parent) == null)
2215                     return root;
2216                 if (xp == (xppl = xpp.left)) {
2217                     if ((xppr = xpp.right) != null && xppr.red) {
2218                         xppr.red = false;
2219                         xp.red = false;
2220                         xpp.red = true;
2221                         x = xpp;
2222                     }
2223                     else {
2224                         if (x == xp.right) {
2225                             root = rotateLeft(root, x = xp);
2226                             xpp = (xp = x.parent) == null ? null : xp.parent;
2227                         }
2228                         if (xp != null) {
2229                             xp.red = false;
2230                             if (xpp != null) {
2231                                 xpp.red = true;
2232                                 root = rotateRight(root, xpp);
2233                             }
2234                         }
2235                     }
2236                 }
2237                 else {
2238                     if (xppl != null && xppl.red) {
2239                         xppl.red = false;
2240                         xp.red = false;
2241                         xpp.red = true;
2242                         x = xpp;
2243                     }
2244                     else {
2245                         if (x == xp.left) {
2246                             root = rotateRight(root, x = xp);
2247                             xpp = (xp = x.parent) == null ? null : xp.parent;
2248                         }
2249                         if (xp != null) {
2250                             xp.red = false;
2251                             if (xpp != null) {
2252                                 xpp.red = true;
2253                                 root = rotateLeft(root, xpp);
2254                             }
2255                         }
2256                     }
2257                 }
2258             }
2259         }
2260 
2261         static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,
2262                                                    TreeNode<K,V> x) {
2263             for (TreeNode<K,V> xp, xpl, xpr;;)  {
2264                 if (x == null || x == root)
2265                     return root;
2266                 else if ((xp = x.parent) == null) {
2267                     x.red = false;
2268                     return x;
2269                 }
2270                 else if (x.red) {
2271                     x.red = false;
2272                     return root;
2273                 }
2274                 else if ((xpl = xp.left) == x) {
2275                     if ((xpr = xp.right) != null && xpr.red) {
2276                         xpr.red = false;
2277                         xp.red = true;
2278                         root = rotateLeft(root, xp);
2279                         xpr = (xp = x.parent) == null ? null : xp.right;
2280                     }
2281                     if (xpr == null)
2282                         x = xp;
2283                     else {
2284                         TreeNode<K,V> sl = xpr.left, sr = xpr.right;
2285                         if ((sr == null || !sr.red) &&
2286                             (sl == null || !sl.red)) {
2287                             xpr.red = true;
2288                             x = xp;
2289                         }
2290                         else {
2291                             if (sr == null || !sr.red) {
2292                                 if (sl != null)
2293                                     sl.red = false;
2294                                 xpr.red = true;
2295                                 root = rotateRight(root, xpr);
2296                                 xpr = (xp = x.parent) == null ?
2297                                     null : xp.right;
2298                             }
2299                             if (xpr != null) {
2300                                 xpr.red = (xp == null) ? false : xp.red;
2301                                 if ((sr = xpr.right) != null)
2302                                     sr.red = false;
2303                             }
2304                             if (xp != null) {
2305                                 xp.red = false;
2306                                 root = rotateLeft(root, xp);
2307                             }
2308                             x = root;
2309                         }
2310                     }
2311                 }
2312                 else { // symmetric
2313                     if (xpl != null && xpl.red) {
2314                         xpl.red = false;
2315                         xp.red = true;
2316                         root = rotateRight(root, xp);
2317                         xpl = (xp = x.parent) == null ? null : xp.left;
2318                     }
2319                     if (xpl == null)
2320                         x = xp;
2321                     else {
2322                         TreeNode<K,V> sl = xpl.left, sr = xpl.right;
2323                         if ((sl == null || !sl.red) &&
2324                             (sr == null || !sr.red)) {
2325                             xpl.red = true;
2326                             x = xp;
2327                         }
2328                         else {
2329                             if (sl == null || !sl.red) {
2330                                 if (sr != null)
2331                                     sr.red = false;
2332                                 xpl.red = true;
2333                                 root = rotateLeft(root, xpl);
2334                                 xpl = (xp = x.parent) == null ?
2335                                     null : xp.left;
2336                             }
2337                             if (xpl != null) {
2338                                 xpl.red = (xp == null) ? false : xp.red;
2339                                 if ((sl = xpl.left) != null)
2340                                     sl.red = false;
2341                             }
2342                             if (xp != null) {
2343                                 xp.red = false;
2344                                 root = rotateRight(root, xp);
2345                             }
2346                             x = root;
2347                         }
2348                     }
2349                 }
2350             }
2351         }
2352 
2353         /**
2354          * Recursive invariant check
2355          */
2356         static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
2357             TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
2358                 tb = t.prev, tn = (TreeNode<K,V>)t.next;
2359             if (tb != null && tb.next != t)
2360                 return false;
2361             if (tn != null && tn.prev != t)
2362                 return false;
2363             if (tp != null && t != tp.left && t != tp.right)
2364                 return false;
2365             if (tl != null && (tl.parent != t || tl.hash > t.hash))
2366                 return false;
2367             if (tr != null && (tr.parent != t || tr.hash < t.hash))
2368                 return false;
2369             if (t.red && tl != null && tl.red && tr != null && tr.red)
2370                 return false;
2371             if (tl != null && !checkInvariants(tl))
2372                 return false;
2373             if (tr != null && !checkInvariants(tr))
2374                 return false;
2375             return true;
2376         }
2377     }
2378 
2379 }
The HashMap source code in JDK 1.8

2.3. The core ideas behind HashMap

 2.3.1. Determining the index in the bucket array

      Whether you are inserting, deleting, or looking up a key-value pair, locating the right slot in the bucket array is the crucial first step. As mentioned earlier, HashMap combines an array with linked lists (separate chaining), so ideally the elements should be spread across the table as evenly as possible, with each bucket holding at most one element; then, once the hash algorithm yields a position, the element at that position is exactly the one we want, with no list traversal needed, which greatly speeds up lookups. How HashMap maps a hash to an array index directly determines how well the hash function disperses keys. Let's look at the implementation:

 1 Method 1:
 2 static final int hash(Object key) {   //jdk1.8 & jdk1.7
 3      int h;
 4      // h = key.hashCode(): step 1, take the hashCode
 5      // h ^ (h >>> 16): step 2, mix in the high bits
 6      return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
 7 }
 8 Method 2:
 9 static int indexFor(int h, int length) {  
10     // jdk1.7 source; jdk1.8 has no such method but uses the same idea inline
11      return h & (length-1);  // step 3, the modulo (masking) step
12 }

    The hash algorithm here boils down to three steps: take the key's hashCode, mix in the high bits, and reduce modulo the table length.

    For any given object, as long as its hashCode() return value is the same, the hash computed by method 1 is always the same. The obvious next step is to take the hash value modulo the array length, which spreads elements fairly evenly; however, the modulo operation is relatively expensive, so HashMap calls method 2 instead to decide which index of the table array the object belongs to. The trick is to compute h & (table.length - 1), and since the length of HashMap's underlying array is always a power of two, this is one of HashMap's speed optimizations: when length is a power of two, h & (length - 1) is equivalent to h % length for indexing purposes, but & is much faster than %.
    JDK 1.8 refines the high-bit mixing by XORing the high 16 bits of hashCode() into the low 16 bits: (h = k.hashCode()) ^ (h >>> 16). This is a trade-off among speed, utility, and quality: even when the table length is small, both the high and the low bits of the hash take part in the index computation, at almost no extra cost.
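The three steps above can be sketched in a small self-contained demo (the class name `HashDemo` is made up for illustration; the two methods mirror the JDK code shown above):

```java
public class HashDemo {
    // Same spreading as java.util.HashMap.hash() in JDK 8:
    // XOR the high 16 bits of hashCode into the low 16 bits
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // For a power-of-two length, (length - 1) is an all-ones mask over the
    // low bits, so masking is equivalent to a non-negative h % length
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int h = hash("hello");
        // For any power-of-two table length, masking equals flooring modulo,
        // even when h is negative
        System.out.println(indexFor(h, 16) == Math.floorMod(h, 16)); // true
    }
}
```

The equivalence only holds because the length is a power of two; for an arbitrary length, masking and modulo diverge.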

 

 2.3.2. The put method of HashMap in JDK 1.8

 2.3.3. The resize mechanism

     Resizing means recomputing the capacity. As elements keep being added to a HashMap and its internal array can no longer hold them, the map has to enlarge the array so more elements fit. Arrays in Java cannot grow in place, so the approach is to replace the existing small array with a new, larger one.
     Let's analyze the resize source. Since JDK 1.8 mixes in red-black trees and is rather complex, we use the JDK 1.7 code here for clarity; the essence is the same, and the differences are discussed later.

 1 void resize(int newCapacity) {   // the new capacity is passed in
 2      Entry[] oldTable = table;    // reference to the pre-resize Entry array
 3      int oldCapacity = oldTable.length;         
 4      if (oldCapacity == MAXIMUM_CAPACITY) {  // the array is already at the maximum size (2^30)
 5          threshold = Integer.MAX_VALUE; // set the threshold to Integer.MAX_VALUE (2^31-1) so it never resizes again
 6          return;
 7     }
 8 
 9      Entry[] newTable = new Entry[newCapacity];  // allocate a new Entry array
10      transfer(newTable);                         //!! move the data into the new Entry array
11      table = newTable;                           // point HashMap's table field at the new array
12      threshold = (int)(newCapacity * loadFactor);// update the threshold
13 }

    So resizing simply replaces the small array with a larger one, and transfer() copies the entries of the old Entry array into the new one.

 1 void transfer(Entry[] newTable) {
 2     Entry[] src = table;                   // src references the old Entry array
 3      int newCapacity = newTable.length;
 4     for (int j = 0; j < src.length; j++) { // iterate over the old Entry array
 5          Entry<K,V> e = src[j];             // take each element of the old array
 6          if (e != null) {
 7              src[j] = null;// drop the old array's reference (after the loop the old array references nothing)
 8              do {
 9                  Entry<K,V> next = e.next;
10                  int i = indexFor(e.hash, newCapacity); //!! recompute each element's position in the new array
11                  e.next = newTable[i];
12                  newTable[i] = e;      // place the element into the new array
13                  e = next;             // move to the next element on the chain
14              } while (e != null);
15          }
16      }
17 }

    Note that newTable[i] is assigned to e.next, i.e. the chain uses head insertion: a newly moved element always ends up at the head of its bucket's list, so elements placed earlier at an index end up at the tail of the chain (if hash collisions occur). This differs from JDK 1.8, as detailed below. Elements that shared a chain in the old array may, after their indices are recomputed, land in different slots of the new array.
    Here is an example of the resize process. Suppose our hash function is simply key mod table size (the array length). The bucket array has size = 2, the keys are 3, 7, and 5, inserted in the order 5, 7, 3; after mod 2 they all collide at table[1]. Assume the load factor is 1, i.e. we resize once the number of key-value pairs exceeds the table size. The next three steps show the bucket array being resized to 4 and all nodes being rehashed.

 

    Now for the JDK 1.8 optimization. Since we always double the capacity (a power-of-two expansion), each element ends up either at its original index or at the original index plus the old capacity. In the figure, n is the table length: (a) shows how two keys, key1 and key2, are indexed before the resize, and (b) after it, where hash1 is key1's hash after the high-bit mixing.

    After the resize, n has doubled, so the mask n-1 covers one more high bit (shown in red in the figure), and the new index changes accordingly:

    Therefore, when growing the HashMap we do not need to recompute the hash as the JDK 1.7 implementation does; we only need to check whether the newly significant bit of the hash is 1 or 0. If it is 0 the index stays the same; if it is 1 the index becomes the old index plus oldCap, as the resize diagram for growing from 16 to 32 illustrates:

 

    This design is quite clever. It saves the time of recomputing hash values, and because the newly significant bit can be considered random (0 or 1 with equal likelihood), the resize spreads the previously colliding nodes evenly across the new buckets. This is an optimization new in JDK 1.8. One difference to note: in JDK 1.7, when a chain is migrated to the new table, elements that land at the same index are reversed; in JDK 1.8 they are not.
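The "old index or old index + oldCap" rule can be verified directly (the class name `ResizeBitDemo` is made up; the logic is just the bit trick described above):

```java
public class ResizeBitDemo {
    // Predict the post-resize index from the pre-resize one using only the
    // single newly significant bit (h & oldCap), as JDK 1.8's resize does
    static int predict(int h, int oldCap) {
        int oldIdx = h & (oldCap - 1);
        return (h & oldCap) == 0 ? oldIdx : oldIdx + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        for (int h : new int[]{5, 21, 37, 100}) {
            // The full recomputation h & (newCap - 1) always agrees
            // with the one-bit prediction
            System.out.println(predict(h, oldCap) == (h & (newCap - 1))); // true
        }
    }
}
```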

1 (1) Resizing is a particularly expensive operation, so when using a HashMap, estimate the map's size and pass a rough capacity at construction time to avoid frequent resizes.
2 (2) The load factor can be changed and may even exceed 1, but do not change it lightly unless the situation is truly special.
3 (3) HashMap is not thread-safe; do not operate on one concurrently from multiple threads — use ConcurrentHashMap instead.
4 (4) The red-black trees introduced in JDK 1.8 greatly improve HashMap's worst-case performance.
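Tip (1) above can be put into practice with a small helper (the class `PresizeDemo` and method `withExpectedSize` are hypothetical names, not JDK API): choose the capacity so that expected ≤ capacity × 0.75, and no rehash happens while filling the map.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    // Capacity chosen so that `expected` entries fit below the 0.75 threshold;
    // HashMap rounds the argument up to the next power of two internally
    static <K, V> HashMap<K, V> withExpectedSize(int expected) {
        int capacity = (int) Math.ceil(expected / 0.75);
        return new HashMap<>(capacity);
    }

    public static void main(String[] args) {
        Map<String, Integer> m = withExpectedSize(1000);
        for (int i = 0; i < 1000; i++) m.put("k" + i, i);
        System.out.println(m.size()); // 1000
    }
}
```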

三、An introduction to ConcurrentHashMap

      HashMap is not thread-safe, which means it must not be modified from multiple threads: at best you get inconsistent data, and in the worst case concurrent insertion can turn a chain into a cycle (an insert can trigger a resize, which rehashes the old array's elements into the new array; concurrent execution of that step can create a circular reference in a chain), after which lookups spin forever and the whole application is affected.
     Collections.synchronizedMap(Map<K,V> m) turns a Map into a thread-safe one. It is just a wrapper class that delegates everything to the underlying Map and guards each call with the synchronized keyword (Hashtable is likewise based on synchronized). Underneath is a mutual-exclusion lock (only the lock-holding thread may enter; competing threads are put to sleep), so performance and throughput are low.

 1 public static <K,V> Map<K,V> synchronizedMap(Map<K,V> m) {
 2     return new SynchronizedMap<>(m);
 3 }
 4 private static class SynchronizedMap<K,V>
 5     implements Map<K,V>, Serializable {
 6     private static final long serialVersionUID = 1978198479659022715L;
 7     private final Map<K,V> m;     // Backing Map
 8     final Object      mutex;        // Object on which to synchronize
 9     SynchronizedMap(Map<K,V> m) {
10         this.m = Objects.requireNonNull(m);
11         mutex = this;
12     }
13     SynchronizedMap(Map<K,V> m, Object mutex) {
14         this.m = m;
15         this.mutex = mutex;
16     }
17     public int size() {
18         synchronized (mutex) {return m.size();}
19     }
20     public boolean isEmpty() {
21         synchronized (mutex) {return m.isEmpty();}
22     }
23     ............
24 }

     ConcurrentHashMap's implementation is far more elaborate, and accordingly much faster. Instead of one global lock, it reduces lock granularity to minimize blocking and contention from lock competition, and its read operations need no lock at all.
     In Java 7, ConcurrentHashMap subdivides itself into a number of small hash maps called segments (Segment), 16 by default. A write first uses the hash code to decide which Segment the entry belongs to, then locks only that Segment. Ideally a default ConcurrentHashMap can accept 16 concurrent writers (if they all hit different Segments). Segment locking does nothing for global operations such as size(): counting the entries requires visiting every Segment, acquiring all the locks, and summing. In fact, ConcurrentHashMap first tries a lock-free count, up to 3 times; if two consecutive passes observe identical modCount values across the Segments, nothing was modified in between and the result can be returned as final; otherwise it takes all the Segment locks and recounts.


     Java 8's ConcurrentHashMap differs substantially from Java 7's. It abandons segments entirely and goes back to a design similar to HashMap: a buckets array with separate chaining (bins likewise become trees past a threshold; the red-black-tree logic differs little from HashMap's, except that CAS is additionally needed for thread safety), with lock granularity refined to individual array slots (HashMap itself gained many optimizations in Java 8, so even heavy collisions keep reasonable performance, while Segments were both bulky and weakly consistent). Its concurrency level is therefore tied to the array length (in Java 7 it was tied to the number of segments).

 3.1. ConcurrentHashMap's hash function

     ConcurrentHashMap's hash function is essentially the same as HashMap's: XOR the high 16 bits of the key's hash code into the low 16 bits (ConcurrentHashMap's buckets array length is likewise always a power of two), then AND the perturbed hash with length - 1 (the largest reachable index) to obtain the target position.    

1 // 2^31 - 1, the maximum int value
2 // this mask keeps a node hash's usable bits, guaranteeing the hash is non-negative
3 static final int HASH_BITS = 0x7fffffff;
4 static final int spread(int h) {
5     return (h ^ (h >>> 16)) & HASH_BITS;
6 }
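The effect of the HASH_BITS mask is easy to check (the class name `SpreadDemo` is made up; `spread` copies the function shown above): even for negative hash codes the result is non-negative, leaving negative hash values free to mark special nodes.

```java
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff;

    // Same as ConcurrentHashMap.spread(): mix the high bits in,
    // then clear the sign bit
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        // Non-negative even for hash codes that are negative
        System.out.println(spread(-1) >= 0);                // true
        System.out.println(spread(Integer.MIN_VALUE) >= 0); // true
    }
}
```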

 3.2. Lookup

    Here is the source of the lookup operation:

 1 public V get(Object key) {
 2     Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
 3     int h = spread(key.hashCode());
 4     if ((tab = table) != null && (n = tab.length) > 0 &&
 5         (e = tabAt(tab, (n - 1) & h)) != null) {
 6         if ((eh = e.hash) == h) {
 7             // first check whether the list head is the target; if so, return it directly
 8             if ((ek = e.key) == key || (ek != null && key.equals(ek)))
 9                 return e.val;
10         }
11         else if (eh < 0)
12             // eh < 0 means this is a special node (TreeBin or ForwardingNode)
13             // so call find() to do the traversal
14             return (p = e.find(h, key)) != null ? p.val : null;
15            // traverse the list
16         while ((e = e.next) != null) {
17             if (e.hash == h &&
18                 ((ek = e.key) == key || (ek != null && key.equals(ek))))
19                 return e.val;
20         }
21     }
22     return null;
23 }

    An ordinary (list) node's hash can never be negative (spread() has already ensured that), so a negative hash can only mean a special node, which cannot be traversed with the while loop used for lists. TreeBin is the head node of a red-black tree (whose nodes are TreeNodes); it holds no key or value itself but points to a linked list of TreeNodes and to their root, and it uses CAS to implement a read-write lock that forces a writer (holding the lock) to wait for readers to finish before restructuring the tree. ForwardingNode is a temporary node used during a data transfer (caused by a resize); it is installed at the head of a bucket. Both are subclasses of Node (as is TreeNode), and so that special nodes can be recognized, the hash fields of TreeBin and ForwardingNode hold sentinel values:

 1 static class Node<K,V> implements Map.Entry<K,V> {
 2     final int hash;
 3     final K key;
 4     volatile V val;
 5     volatile Node<K,V> next;
 6     Node(int hash, K key, V val, Node<K,V> next) {
 7         this.hash = hash;
 8         this.key = key;
 9         this.val = val;
10         this.next = next;
11     }
12     public final V setValue(V value) {
13         throw new UnsupportedOperationException();
14     }
15     ......
16     /**
17      * Virtualized support for map.get(); overridden in subclasses.
18      */
19     Node<K,V> find(int h, Object k) {
20         Node<K,V> e = this;
21         if (k != null) {
22             do {
23                 K ek;
24                 if (e.hash == h &&
25                     ((ek = e.key) == k || (ek != null && k.equals(ek))))
26                     return e;
27             } while ((e = e.next) != null);
28         }
29         return null;
30     }
31 }
32 /*
33  * Encodings for Node hash fields. See above for explanation.
34  */
35 static final int MOVED     = -1; // hash for forwarding nodes
36 static final int TREEBIN   = -2; // hash for roots of trees
37 static final int RESERVED  = -3; // hash for transient reservations    
38 static final class TreeBin<K,V> extends Node<K,V> {
39     ....
40     TreeBin(TreeNode<K,V> b) {
41         super(TREEBIN, null, null, null);
42         ....
43     }   
44      
45     ....     
46 }
47 static final class ForwardingNode<K,V> extends Node<K,V> {
48     final Node<K,V>[] nextTable;
49     ForwardingNode(Node<K,V>[] tab) {
50         super(MOVED, null, null, null);
51         this.nextTable = tab;
52     }
53     .....
54 }

  There is no lock-related code anywhere in get(), so how is it thread-safe? A call such as ConcurrentHashMap.get("a") breaks down into roughly these steps:

1  Use the hash function to compute the index into table.
2  Read the head node from table.
3  Walk from the head node until the target node is found.
4  Read the value out of the target node and return it.

    So it suffices to guarantee that reads of table and of the nodes always see the latest data. ConcurrentHashMap achieves this not with locks but with the volatile keyword: in the source, table, Node.val, and Node.next are all declared volatile.

  The volatile keyword guarantees the visibility and ordering of a variable across threads, implemented underneath with memory barriers. For performance, a modern CPU does not execute instructions in exactly the order the program wrote them (some compilers reorder as well); this is out-of-order execution. It improves pipeline utilization and is fine as long as the result is logically correct for the program (respecting happens-before). In the multi-core era, though, unconstrained reordering without protection causes trouble: each CPU reorders on its own, and the ordering one CPU guarantees for itself can be broken by another. Memory barriers are the protection against this: a barrier can be thought of as a synchronization point (though it is itself a CPU instruction). For example, the SFENCE instruction of the IA-32 ISA requires all writes before it to complete, while reads may still be reordered past it; LFENCE requires all earlier reads to complete; and the coarser MFENCE requires all earlier reads and writes to complete. A memory barrier is like a fence protecting instruction order: instructions after it cannot be hoisted across it. Inserting a barrier between a write and a subsequent read guarantees the read observes the latest data, because the write before the barrier has been pushed back toward memory (under a cache-coherence protocol it is not written straight to memory: the writing CPU changes the state of the line in its private cache and notifies the other CPUs that the line has been modified; a CPU that later reads finds its copy invalid and fetches the latest line from the owning CPU, which only then downgrades its state and writes back).
  For instance, reading a variable V declared volatile always yields the latest data from JMM (Java Memory Model) main memory. Because of the barriers, every use of V (via the JVM's use instruction; the instructions mentioned below are JVM instructions, not CPU ones) must be preceded by a load (moving the value obtained from main memory into working memory), and the JVM requires load to follow read (fetching from main memory), so every access to V reads from main memory first. Symmetrically, every write is flushed straight back to main memory thanks to the ordering the barriers enforce. Note that volatile does not make operations atomic: compound read-modify-write operations on such a variable are still not thread-safe. Fortunately ConcurrentHashMap only relies on volatile to ensure it reads the latest values, so this is not a problem.
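The publish/observe pattern described above can be shown in a minimal sketch (the class `VolatileDemo` and its fields are made up for illustration): the volatile write to `ready` happens-before the reader's observation of it, so the earlier plain write to `payload` is also guaranteed visible.

```java
public class VolatileDemo {
    static volatile boolean ready = false;
    static int payload = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the volatile write becomes visible */ }
            // happens-before: payload's write is visible once ready reads true
            System.out.println(payload); // 42
        });
        reader.start();
        payload = 42;   // plain write, published by the volatile write below
        ready = true;   // volatile write: flushes and orders the write above
        reader.join();
    }
}
```

Without `volatile` on `ready`, the reader could legally spin forever on a stale cached value; this is exactly why table, Node.val, and Node.next are volatile.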

    For performance, Doug Lea (the author of java.util.concurrent) manipulates table directly through the Unsafe class. Java advertises itself as a safe language, and the price of that safety is giving up the programmer's freedom to poke at memory directly. In C/C++ you can manipulate memory through pointer variables (really virtual addresses), but in inexperienced hands that flexibility also breeds silly bugs such as out-of-bounds memory accesses. As its name suggests, Unsafe is unsafe: it exposes many native methods (implemented via JNI, mostly in C/C++) that support pointer-style operations, hence the name. Unsafe as it is, being implemented in C/C++ means operations that interact with the operating system are faster than going through Java, which sits behind an extra layer of abstraction (the JVM); the price is losing the JVM's cross-platform portability (it is ultimately just a C/C++ file that must be recompiled per platform).
    Three functions operate on table, all of them using Unsafe (which appears throughout java.util.concurrent):

 1 @SuppressWarnings("unchecked")
 2 static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i) {
 3     // fetch a reference from the tab array with volatile semantics
 4     // the second argument is an offset into tab used to locate the target object
 5     return (Node<K,V>)U.getObjectVolatile(tab, ((long)i << ASHIFT) + ABASE);
 6 }
 7 static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i,
 8                                     Node<K,V> c, Node<K,V> v) {
 9     // CAS the slot of tab at the given offset to v
10     // c is the expected value; if it does not match the actual value, return false
11     // otherwise v is installed at the target slot and true is returned
12     return U.compareAndSwapObject(tab, ((long)i << ASHIFT) + ABASE, c, v);
13 }
14 static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v) {
15     // set the slot of tab at the given offset, with volatile semantics
16     U.putObjectVolatile(tab, ((long)i << ASHIFT) + ABASE, v);
17 }

    Initialization: like HashMap, ConcurrentHashMap is lazy. The buckets array is initialized on the first put(); the default constructor is even an empty function.    

1 /**
2  * Creates a new, empty map with the default initial table size (16).
3  */
4 public ConcurrentHashMap() {
5 }

     One thing needs care, though: ConcurrentHashMap works in concurrent, multithreaded environments, so what happens if several threads call put() at the same time? That could initialize the table more than once, so there must be protection. ConcurrentHashMap declares an instance variable sizeCtl that controls table initialization and resizing, with a default value of 0. When it is negative, the table is being initialized or resized: -1 means initialization is in progress, and -N means N-1 threads are currently resizing. Otherwise, if the table has not been initialized yet (table == null), sizeCtl holds the array size to use for initialization (so the initialCapacity passed to a constructor is, after adjustment, assigned to it). Once the table has been initialized, it holds the threshold that triggers the next resize, computed as sizeCtl = n - (n >>> 2), i.e. 75% of n, matching a HashMap with the default load factor (0.75).

    private transient volatile int sizeCtl;
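The threshold arithmetic is worth a quick sanity check (the class name `SizeCtlDemo` is made up): n - (n >>> 2) is n minus a quarter of n, i.e. 0.75 × n, with no floating point involved.

```java
public class SizeCtlDemo {
    // Same threshold formula ConcurrentHashMap stores into sizeCtl
    static int threshold(int n) {
        return n - (n >>> 2);
    }

    public static void main(String[] args) {
        System.out.println(threshold(16)); // 12, i.e. 0.75 * 16
        System.out.println(threshold(32)); // 24, i.e. 0.75 * 32
    }
}
```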

    Table initialization happens in initTable(); here is its source:

 1 /**
 2  * Initializes table, using the size recorded in sizeCtl.
 3  */
 4 private final Node<K,V>[] initTable() {
 5     Node<K,V>[] tab; int sc;
 6     while ((tab = table) == null || tab.length == 0) {
 7         // sizeCtl is negative: another thread is already initializing
 8         // so the current thread yields its CPU time slice
 9         if ((sc = sizeCtl) < 0)
10             Thread.yield(); // lost initialization race; just spin
11         // otherwise, try to modify sizeCtl via CAS
12         else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
13             try {
14                 if ((tab = table) == null || tab.length == 0) {
15                     // with the default constructor sizeCtl == 0, so use the default capacity (16)
16                     // otherwise initialize according to sizeCtl
17                     int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
18                     @SuppressWarnings("unchecked")
19                     Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
20                     table = tab = nt;
21                     // compute the threshold, 75% of n
22                     sc = n - (n >>> 2);
23                 }
24             } finally {
25                 // store the threshold into sizeCtl
26                 sizeCtl = sc;
27             }
28             break;
29         }
30     }
31     return tab;
32 }

    sizeCtl is a volatile variable: as soon as one thread's CAS succeeds, sizeCtl is temporarily set to -1, so other threads can tell from it that the table is in the middle of initialization. Afterwards sizeCtl is set to the threshold that will trigger the next resize.

 3.3. Resizing

    ConcurrentHashMap triggers resizing at moments similar to HashMap: either when converting a list into a red-black tree it finds the table length below the threshold (64), in which case it resizes instead of treeifying; or when adding an element it finds the current entry count above the threshold, and resizes.

 1 private final void treeifyBin(Node<K,V>[] tab, int index) {
 2     Node<K,V> b; int n, sc;
 3     if (tab != null) {
 4         // below MIN_TREEIFY_CAPACITY: resize instead
 5         if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
 6             tryPresize(n << 1);
 7         else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
 8             synchronized (b) {
 9                 // convert the list into a red-black tree...
10             }
11         }
12     }
13 }
14 ...
15 final V putVal(K key, V value, boolean onlyIfAbsent) {
16     ...
17     addCount(1L, binCount); // update the count
18     return null;
19 }
20 private final void addCount(long x, int check) {
21     // counting...
22     if (check >= 0) {
23         Node<K,V>[] tab, nt; int n, sc;
24         // s (the element count) >= sizeCtl: trigger a resize
25         while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
26                (n = tab.length) < MAXIMUM_CAPACITY) {
27             // resize stamp
28             int rs = resizeStamp(n);
29             // a negative sizeCtl means other threads are already resizing
30             if (sc < 0) {
31                 // the resize has finished: break out of the loop
32                 if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
33                     sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
34                     transferIndex <= 0)
35                     break;
36                 // join the resize, bumping sizeCtl (resizing threads + 1)
37                 if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
38                     transfer(tab, nt);
39             }
40             // start the resize (this is the first resizing thread)
41             // and set sizeCtl to notify the other threads
42             else if (U.compareAndSwapInt(this, SIZECTL, sc,
43                                          (rs << RESIZE_STAMP_SHIFT) + 2))
44                 transfer(tab, null);
45             // recount, so the loop can check whether another resize is needed
46             s = sumCount();
47         }
48     }
49 }
The resize-triggering code

    The sizeCtl manipulation involves a lot of bit twiddling, so let's first understand what those bits mean. resizeStamp() returns a stamp used for validation, meaning "a resize of a table of length n". It ORs the number of leading zeros of n (the zeros before the most significant one bit) with 1 << 15, so the top bit of the low 16 bits is 1 and the remaining bits encode n's leading-zero count.

1 static final int resizeStamp(int n) {
2     // RESIZE_STAMP_BITS = 16
3     return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
4 }

    The first resizing thread initializes sizeCtl as (rs << RESIZE_STAMP_SHIFT) + 2. RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS = 16, so rs << 16 moves the stamp into the high 16 bits; the highest bit is then 1, making sizeCtl negative. Adding two (why 2? recall the sizeCtl description: 1 denotes the initializing state, so the actual thread count is the stored value minus 1) records that one thread is currently resizing. sizeCtl is thus split in two: the high 16 bits are a validation stamp for n, and the low 16 bits are the number of resizing threads + 1. Readers may wonder why joining a resize increments with sc + 1 rather than sc - 1: sizeCtl is manipulated purely through bit operations, so only its binary layout matters, not its numeric value, and sc + 1 adds one to the low 16 bits.
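The bit layout just described can be checked concretely (the class name `ResizeStampDemo` is made up; `resizeStamp` copies the JDK function):

```java
public class ResizeStampDemo {
    static final int RESIZE_STAMP_BITS = 16;
    static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

    // Same as ConcurrentHashMap.resizeStamp()
    static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

    public static void main(String[] args) {
        int rs = resizeStamp(16);                 // 27 leading zeros | 0x8000
        int sc = (rs << RESIZE_STAMP_SHIFT) + 2;  // first resizing thread
        System.out.println(rs);                   // 32795 (0x801B)
        System.out.println(sc < 0);               // true: high bit is set
        System.out.println(sc & 0xffff);          // 2: resizing threads + 1
    }
}
```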


    tryPresize() follows logic similar to the second half of addCount(): it keeps inspecting sizeCtl to determine the current state, then picks the corresponding strategy.

 1 private final void tryPresize(int size) {
 2     // adjust size
 3     int c = (size >= (MAXIMUM_CAPACITY >>> 1)) ? MAXIMUM_CAPACITY :
 4         tableSizeFor(size + (size >>> 1) + 1);
 5     int sc;
 6     // sizeCtl is the default value or a positive number,
 7     // meaning the table is not initialized yet
 8     // or no other thread is currently resizing
 9     while ((sc = sizeCtl) >= 0) {
10         Node<K,V>[] tab = table; int n;
11         if (tab == null || (n = tab.length) == 0) {
12             n = (sc > c) ? sc : c;
13             // set sizeCtl to tell other threads the table is being initialized
14             if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
15                 try {
16                     if (table == tab) {
17                         @SuppressWarnings("unchecked")
18                         Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
19                         table = nt;
20                         // compute the threshold for the next resize
21                         sc = n - (n >>> 2);
22                     }
23                 } finally {
24                     // store the threshold into sizeCtl
25                     sizeCtl = sc;
26                 }
27             }
28         }
29         // the threshold is not exceeded, or the capacity limit is reached: break
30         else if (c <= sc || n >= MAXIMUM_CAPACITY)
31             break;
32         // resize; same logic as the second half of addCount()
33         else if (tab == table) {
34             int rs = resizeStamp(n);
35             if (sc < 0) {
36                 Node<K,V>[] nt;
37                 if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
38                     sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
39                     transferIndex <= 0)
40                     break;
41                 if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
42                     transfer(tab, nt);
43             }
44             else if (U.compareAndSwapInt(this, SIZECTL, sc,
45                                          (rs << RESIZE_STAMP_SHIFT) + 2))
46                 transfer(tab, null);
47         }
48     }
49 }
tryPresize

      The heart of resizing is the data transfer. In a single-threaded world it is trivial: move the old array's data into the new array. In a multithreaded world that does not work: thread safety must be preserved, and while one resize is running, other threads may be inserting elements, possibly triggering another resize. One could wrap the whole transfer in a mutex; that would work, but with dreadful throughput. A mutex blocks every thread that reaches the critical section, costing extra system resources as the kernel saves their contexts and parks them on a blocked queue; the longer the lock holder runs, the longer the competitors stall, so throughput collapses and response times grow. And locks always come with the risk of deadlock; once a deadlock occurs, the whole application suffers, so locking should always be the last resort. Doug Lea did not reach for a lock; he built a lock-free concurrent synchronization strategy on CAS, and impressively, rather than shutting other threads out, he invites them to help with the work. How can multiple threads cooperate? Doug Lea treats the table array itself as a task queue shared among threads and maintains a single pointer: when a thread starts transferring data, it first moves the pointer, claiming the bucket region the pointer swept over for itself. The pointer is declared as a volatile int whose initial position is the end of the table, i.e. table.length, so this task queue is clearly consumed in reverse order.
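The pointer-claiming idea can be sketched with an AtomicInteger standing in for `transferIndex` (the class `StrideClaimDemo` and method `claim` are hypothetical names; `transfer()` does the same thing with Unsafe-based CAS and `MIN_TRANSFER_STRIDE`):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class StrideClaimDemo {
    // CAS the shared pointer downwards by one stride, claiming the bucket
    // range [bound, i]; returns null once the queue is exhausted
    static int[] claim(AtomicInteger transferIndex, int stride) {
        int nextIndex, nextBound;
        do {
            if ((nextIndex = transferIndex.get()) <= 0) return null;
            nextBound = (nextIndex > stride) ? nextIndex - stride : 0;
        } while (!transferIndex.compareAndSet(nextIndex, nextBound));
        return new int[]{nextBound, nextIndex - 1};
    }

    public static void main(String[] args) {
        AtomicInteger transferIndex = new AtomicInteger(64); // table.length
        int[] r;
        while ((r = claim(transferIndex, 16)) != null)
            System.out.println("claimed buckets [" + r[0] + ", " + r[1] + "]");
        // claims [48, 63], [32, 47], [16, 31], [0, 15] in reverse order
    }
}
```

Because the pointer only ever moves down via CAS, two threads can never claim overlapping regions, which is exactly what makes lock-free task distribution safe here.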

 1 /**
 2  * The next table index (plus one) to split while resizing.
 3  */
 4 private transient volatile int transferIndex;
 5 /**
 6  * The minimum number of buckets one thread is responsible for
 7  */
 8 private static final int MIN_TRANSFER_STRIDE = 16;
 9      
10 /**
11  * The next table to use; non-null only while resizing.
12  */
13 private transient volatile Node<K,V>[] nextTable;

    A bucket whose migration is complete is replaced with a ForwardingNode, marking it as already migrated by some thread. ForwardingNode is a special node recognizable by its sentinel hash value; it also overrides find() to look the target up in the new array. The migration itself lives in transfer(): threads coordinate through sizeCtl and the transferIndex pointer, each owning its own region; a finished bucket becomes a ForwardingNode, and any thread that encounters one skips that bucket and handles the next. transfer() has roughly three parts. The first initializes the variables used later:

 1 /**
 2  * Moves and/or copies the nodes in each bin to new table. See
 3  * above for explanation.
 4  */
 5 private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
 6     int n = tab.length, stride;
 7     // decide how many buckets each thread handles based on the CPU count,
 8     // so that too many resize threads do not end up hurting performance
 9     if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
10         stride = MIN_TRANSFER_STRIDE; // subdivide range
11     // initialize nextTab, with twice the old array's capacity
12     if (nextTab == null) {            // initiating
13         try {
14             @SuppressWarnings("unchecked")
15             Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
16             nextTab = nt;
17         } catch (Throwable ex) {      // try to cope with OOME
18             sizeCtl = Integer.MAX_VALUE;
19             return;
20         }
21         nextTable = nextTab;
22         transferIndex = n; // initialize the pointer
23     }
24     int nextn = nextTab.length;
25     ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
26     boolean advance = true;
27     boolean finishing = false; // to ensure sweep before committing nextTab

    The second part assigns the current thread its tasks and tracks its progress. This is the core logic of transfer(), describing how it cooperates with the other threads:

 1 // i points at the current bucket; bound is the boundary of this thread's region
 2 for (int i = 0, bound = 0;;) {
 3     Node<K,V> f; int fh;
 4     // this loop keeps trying, via CAS, to assign the current thread a task
 5     // until assignment succeeds or the task queue has been fully handed out
 6     // if the thread already owns a bucket region,
 7     // --i moves to the next pending bucket and the loop exits
 8     while (advance) {
 9         int nextIndex, nextBound;
10         // --i points i at the next pending bucket
11         // if --i >= bound, the thread already owns a bucket region
12         // and still has unprocessed buckets in it
13         if (--i >= bound || finishing)
14             advance = false;
15         // transferIndex <= 0 means all buckets have been handed out
16         else if ((nextIndex = transferIndex) <= 0) {
17             i = -1;
18             advance = false;
19         }
20         // move the transferIndex pointer,
21         // establishing the bucket range this thread owns
22         // i points at the first bucket of the range; note traversal is reversed
23         // the range is (bound, i]: i is the region's last bucket, iterated in reverse
24         else if (U.compareAndSwapInt
25                  (this, TRANSFERINDEX, nextIndex,
26                   nextBound = (nextIndex > stride ?
27                                nextIndex - stride : 0))) {
28             bound = nextBound;
29             i = nextIndex - 1;
30             advance = false;
31         }
32     }
33     // the current thread has processed all the buckets it owns
34     if (i < 0 || i >= n || i + n >= nextn) {
35         int sc;
36         // the whole task queue is done
37         if (finishing) {
38             nextTable = null;
39             table = nextTab;
40             // set the new threshold
41             sizeCtl = (n << 1) - (n >>> 1);
42             return;
43         }
44         // decrement the number of active resize threads
45         if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
46             // (resizeStamp << RESIZE_STAMP_SHIFT) + 2 means one resize thread remains
47             // so, conversely, (sc - 2) != resizeStamp << RESIZE_STAMP_SHIFT
48             // means other threads are still resizing: just return
49             if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
50                 return;
51             // otherwise this thread is the last one resizing:
52             // set the finishing flag
53             finishing = advance = true;
54             i = n; // recheck before commit
55         }
56     }
57     // the pending bucket is empty:
58     // install a ForwardingNode to notify the other threads
59     else if ((f = tabAt(tab, i)) == null)
60         advance = casTabAt(tab, i, null, fwd);
61     // the pending bucket's head node is a ForwardingNode,
62     // so the bucket has already been processed: skip it
63     else if ((fh = f.hash) == MOVED)
64         advance = true; // already processed

    The last part is the actual migration of the bucket i points at, with logic similar to HashMap's: use the old array's capacity as a mask and AND it with each node's hash to extract the newly significant bit; nodes whose bit is 0 go into list A, nodes whose bit is 1 into list B; list A keeps its index in the new array (the same as in the old array), while list B goes to the original index plus the old capacity. This cuts down the rehash work while still spreading nodes evenly.

 1 else {
 2     // operations on the nodes do still need a lock,
 3     // but a very fine-grained one: only the bucket's head node is locked
 4     synchronized (f) {
 5         if (tabAt(tab, i) == f) {
 6             Node<K,V> ln, hn;
 7             // a non-negative hash code means this is a linked list
 8             if (fh >= 0) {
 9                 // fh & n extracts the newly significant bit, splitting the list in two
10                 // it is either 0 or 1; for more on this bit trick,
11                 // see the HashMap resize explanation earlier in this article
12                 int runBit = fh & n;
13                 Node<K,V> lastRun = f;
14                 // this loop records the last run of consecutive same-class nodes
15                 // (the class being decided by fh & n)
16                 // that run is reused directly, with no extra copying
17                 for (Node<K,V> p = f.next; p != null; p = p.next) {
18                     int b = p.hash & n;
19                     if (b != runBit) {
20                         runBit = b;
21                         lastRun = p;
22                     }
23                 }
24                 // class 0 goes into the ln list, class 1 into hn
25                 // lastRun is the first node of the consecutive run
26                 if (runBit == 0) {
27                     ln = lastRun;
28                     hn = null;
29                 }
30                 else {
31                     hn = lastRun;
32                     ln = null;
33                 }
34                 // copy the nodes before the last run into ln or hn by class
35                 // insertion is at the head; Node's fourth constructor argument is next
36                 // so even nodes of the same class as lastRun are inserted at the head
37                 for (Node<K,V> p = f; p != lastRun; p = p.next) {
38                     int ph = p.hash; K pk = p.key; V pv = p.val;
39                     if ((ph & n) == 0)
40                         ln = new Node<K,V>(ph, pk, pv, ln);
41                     else
42                         hn = new Node<K,V>(ph, pk, pv, hn);
43                 }
44                 // ln goes to the original index; hn to original index + old capacity
45                 // same as HashMap; see the HashMap resize discussion above if unclear
46                 setTabAt(nextTab, i, ln);
47                 setTabAt(nextTab, i + n, hn);
48                 setTabAt(tab, i, fwd); // mark the bucket as processed
49                 advance = true;
50             }
51             // the red-black tree case: same logic, classify by the new bit
52             else if (f instanceof TreeBin) {
53                 TreeBin<K,V> t = (TreeBin<K,V>)f;
54                 TreeNode<K,V> lo = null, loTail = null;
55                 TreeNode<K,V> hi = null, hiTail = null;
56                 int lc = 0, hc = 0;
57                 for (Node<K,V> e = t.first; e != null; e = e.next) {
58                     int h = e.hash;
59                     TreeNode<K,V> p = new TreeNode<K,V>
60                         (h, e.key, e.val, null, null);
61                     if ((h & n) == 0) {
62                         if ((p.prev = loTail) == null)
63                             lo = p;
64                         else
65                             loTail.next = p;
66                         loTail = p;
67                         ++lc;
68                     }
69                     else {
70                         if ((p.prev = hiTail) == null)
71                             hi = p;
72                         else
73                             hiTail.next = p;
74                         hiTail = p;
75                         ++hc;
76                     }
77                 }
78                 // if the element count does not exceed UNTREEIFY_THRESHOLD, degrade to a list
79                 ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
80                     (hc != 0) ? new TreeBin<K,V>(lo) : t;
81                 hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
82                     (lc != 0) ? new TreeBin<K,V>(hi) : t;
83                 setTabAt(nextTab, i, ln);
84                 setTabAt(nextTab, i + n, hn);
85                 setTabAt(tab, i, fwd);
86                 advance = true;
87             }
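The lastRun trick above is worth isolating (the class `LastRunDemo` with its tiny `Node` and `split` are hypothetical names; the splitting logic mirrors the loop in the listing): the trailing run of nodes whose (hash & n) bit agrees is reused as-is, and only the earlier nodes are copied.

```java
public class LastRunDemo {
    static final class Node {
        final int hash; final String key; Node next;
        Node(int hash, String key, Node next) { this.hash = hash; this.key = key; this.next = next; }
    }

    // Split chain f into ln (new bit 0) and hn (new bit 1) lists, reusing the
    // trailing run of same-class nodes exactly as transfer() does
    static Node[] split(Node f, int n) {
        int runBit = f.hash & n;
        Node lastRun = f;
        for (Node p = f.next; p != null; p = p.next) {
            int b = p.hash & n;
            if (b != runBit) { runBit = b; lastRun = p; }
        }
        Node ln, hn;
        if (runBit == 0) { ln = lastRun; hn = null; } else { hn = lastRun; ln = null; }
        // copy only the nodes before lastRun, head-inserting by class
        for (Node p = f; p != lastRun; p = p.next) {
            if ((p.hash & n) == 0) ln = new Node(p.hash, p.key, ln);
            else hn = new Node(p.hash, p.key, hn);
        }
        return new Node[]{ln, hn};
    }

    public static void main(String[] args) {
        // hashes 1, 17, 21, 29 -> new-bit classes 0, 1, 1, 1 (last run starts at "b")
        Node f = new Node(1, "a", new Node(17, "b", new Node(21, "c", new Node(29, "d", null))));
        Node[] r = split(f, 16);
        System.out.println(r[0].key); // a  (the copied low list)
        System.out.println(r[1].key); // b  (the reused suffix b -> c -> d)
    }
}
```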

 3.4. Counting

     In Java 7, ConcurrentHashMap counts each Segment separately; obtaining the total requires taking every Segment's lock and then summing. Java 8 dropped Segments, so that approach no longer applies; besides, while simple and accurate, it sacrifices performance. Java 8 declares a volatile variable baseCount to record the element count, modified via CAS; every insertion or deletion calls addCount() to update it.

 1 private transient volatile long baseCount;
 2 private final void addCount(long x, int check) {
 3     CounterCell[] as; long b, s;
 4     // the CAS update of baseCount failed:
 5     // fall back to updating through the CounterCells
 6     if ((as = counterCells) != null ||
 7         !U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
 8         CounterCell a; long v; int m;
 9         boolean uncontended = true;
10         // if CounterCells is uninitialized,
11         // or the CAS update of this thread's CounterCell fails,
12         // call fullAddCount(), which initializes CounterCells and updates the count
13         if (as == null || (m = as.length - 1) < 0 ||
14             (a = as[ThreadLocalRandom.getProbe() & m]) == null ||
15             !(uncontended =
16               U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
17             fullAddCount(x, uncontended);
18             return;
19         }
20         if (check <= 1)
21             return;
22         // sum the total
23         s = sumCount();
24     }
25     if (check >= 0) {
26         // check whether a resize is needed, as covered above
27     }
28 }

     counterCells is an array of CounterCell elements whose size is related to the machine's CPU count; it is not initialized eagerly, only when fullAddCount() is called. CounterCell is a simple static inner class, each instance being one counting cell:

 1 /**
 2  * Table of counter cells. When non-null, size is a power of 2.
 3  */
 4 private transient volatile CounterCell[] counterCells;
 5 /**
 6  * A padded cell for distributing counts.  Adapted from LongAdder
 7  * and Striped64.  See their internal docs for explanation.
 8  */
 9 @sun.misc.Contended static final class CounterCell {
10     volatile long value;
11     CounterCell(long x) { value = x; }
12 }

     注解@sun.misc.Contended用于解决伪共享问题。所谓伪共享,即是在同一缓存行(CPU缓存的基本单位)中存储了多个变量,当其中一个变量被修改时,就会影响到同一缓存行内的其他变量,导致它们也要跟着被标记为失效,其他变量的缓存命中率将会受到影响。解决伪共享问题的方法一般是对该变量填充一些无意义的占位数据,从而使它独享一个缓存行。
     ConcurrentHashMap的计数设计与LongAdder类似。在一个低并发的情况下,就只是简单地使用CAS操作来对baseCount进行更新,但只要这个CAS操作失败一次,就代表有多个线程正在竞争,那么就转而使用CounterCell数组进行计数,数组内的每个ConuterCell都是一个独立的计数单元。每个线程都会通过ThreadLocalRandom.getProbe() & m寻址找到属于它的CounterCell,然后进行计数。ThreadLocalRandom是一个线程私有的伪随机数生成器,每个线程的probe都是不同的(这点基于ThreadLocalRandom的内部实现,它在内部维护了一个probeGenerator,这是一个类型为AtomicInteger的静态常量,每当初始化一个ThreadLocalRandom时probeGenerator都会先自增一个常量然后返回的整数即为当前线程的probe,probe变量被维护在Thread对象中),可以认为每个线程的probe就是它在CounterCell数组中的hash code。这种方法将竞争数据按照线程的粒度进行分离,相比所有竞争线程对一个共享变量使用CAS不断尝试在性能上要效率好多了,这也是为什么在高并发环境下LongAdder要优于AtomicInteger的原因。
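Since the baseCount + CounterCell scheme is essentially LongAdder, the public LongAdder class can stand in for a demonstration (the class `StripedCountDemo` and method `countWith` are made-up names): contended increments are striped across cells and summed on read, with no lost updates.

```java
import java.util.concurrent.atomic.LongAdder;

public class StripedCountDemo {
    // Have `threads` threads each add `perThread` increments concurrently,
    // then sum the striped cells
    static long countWith(int threads, int perThread) throws InterruptedException {
        LongAdder count = new LongAdder();
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            ts[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) count.increment();
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        return count.sum(); // merges baseCount-style base plus all cells
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWith(4, 100_000)); // 400000
    }
}
```

Replacing the LongAdder with a single CAS-updated long would still be correct, but under contention every failed CAS retries against the same variable, which is exactly the bottleneck the striping avoids.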
     fullAddCount() uses the current thread's probe to find its CounterCell and update the count, initializing the CounterCell array (and the cell itself) if needed. Its implementation matches longAccumulate() in the Striped64 class (LongAdder's superclass): the CounterCell array is treated as a hash table, each thread's probe is its hash code, and the hash function is simply (n - 1) & probe. The array size is always a power of two, starting at an initial capacity of 2 and doubling on each resize; for performance reasons it is capped at the machine's CPU count. With such a small bucket count, collisions in the CounterCell array are frequent, and a collision means one CounterCell is contended by multiple threads. To keep this safe, Doug Lea simulates a spin lock using an infinite loop plus CAS on a volatile int that has only two states: 0 means unlocked, 1 means locked by some thread. This spin lock protects CounterCell creation, CounterCell array initialization, and array resizing. The count update itself relies on CAS: each loop iteration attempts a CAS and exits on success; on failure it calls ThreadLocalRandom.advanceProbe() to give the current thread a new probe and loops again, hoping to land on an uncontended CounterCell next time. If two consecutive CAS updates fail, the array is resized; this resize triggers at most once per loop pass, and only while the capacity is below the cap.
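The cellsBusy-style spin lock can be sketched with public APIs. The JDK CASes a plain volatile int through Unsafe; AtomicInteger is the equivalent available to application code (this class and its names are ours, for illustration only):

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal spin lock mirroring cellsBusy: 0 = unlocked, 1 = locked.
final class SpinLock {
    private final AtomicInteger busy = new AtomicInteger(0);

    // One CAS attempt, exactly what fullAddCount() does before
    // creating or resizing cells.
    boolean tryLock() {
        return busy.compareAndSet(0, 1);
    }

    // Infinite loop + CAS: spin until the 0 -> 1 transition succeeds.
    void lock() {
        while (!tryLock()) { /* spin */ }
    }

    // A volatile write of 0 releases the lock (like `cellsBusy = 0`
    // in the finally block of the JDK code).
    void unlock() {
        busy.set(0);
    }
}
```

Note that fullAddCount() never blocks on this lock: on a failed tryLock it simply takes a different branch, which is why contention on cellsBusy stays cheap.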
    The main flow of fullAddCount() is as follows:

1     First check whether the current thread has initialized its ThreadLocalRandom; if not, initialize it. ThreadLocalRandom maintains the thread's probe, and the probe is the key to addressing into the array.
2     Check whether the CounterCell array has been initialized; if it has, locate the CounterCell corresponding to the probe.
3     If that CounterCell is null, it must be created first; the count increment is passed into its constructor, so a successful creation already completes the count update. Creation requires acquiring the spin lock.
4     If it is non-null, update its count following the logic described above.
5     If the CounterCell array is uninitialized, try to acquire the spin lock and initialize it. Initialization also creates one CounterCell recording the increment, so success again means the update is done.
6     If the spin lock is held by another thread and the array cannot be initialized, fall back to updating baseCount via CAS.
private final void fullAddCount(long x, boolean wasUncontended) {
    int h;
    // A probe of 0 means this thread's ThreadLocalRandom is uninitialized,
    // i.e. this is the thread's first time in this function
    if ((h = ThreadLocalRandom.getProbe()) == 0) {
        // Initialize ThreadLocalRandom; the thread is assigned a probe
        ThreadLocalRandom.localInit();      // force initialization
        // The probe addresses into the CounterCell array
        h = ThreadLocalRandom.getProbe();
        // Mark as uncontended
        wasUncontended = true;
    }
    // Collision flag
    boolean collide = false;                // True if last slot nonempty
    for (;;) {
        CounterCell[] as; CounterCell a; int n; long v;
        // The CounterCell array is initialized
        if ((as = counterCells) != null && (n = as.length) > 0) {
            // The addressed cell is empty: create a new one
            if ((a = as[(n - 1) & h]) == null) {
                // cellsBusy is a volatile int with only two states,
                // used as a spin lock: 0 = unlocked, 1 = locked
                if (cellsBusy == 0) {            // Try to attach new Cell
                    // Create a new CounterCell with x as its initial value
                    CounterCell r = new CounterCell(x); // Optimistic create
                    // Try to acquire the spin lock via CAS
                    if (cellsBusy == 0 &&
                        U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
                        // Lock acquired; track whether the cell gets installed
                        boolean created = false;
                        try {               // Recheck under lock
                            CounterCell[] rs; int m, j;
                            // Re-check that the array is non-empty
                            // and the addressed slot is still empty
                            if ((rs = counterCells) != null &&
                                (m = rs.length) > 0 &&
                                rs[j = (m - 1) & h] == null) {
                                // Install the newly created cell
                                rs[j] = r;
                                created = true;
                            }
                        } finally {
                            // Release the lock
                            cellsBusy = 0;
                        }
                        // On success, break out of the loop: the new cell's
                        // initial value is the increment, so counting is done
                        if (created)
                            break;
                        // Otherwise another thread has already filled
                        // as[(n - 1) & h]; restart from the top of the loop
                        continue;           // Slot is now non-empty
                    }
                }
                collide = false;
            }
            // as[(n - 1) & h] is non-empty.
            // addCount() passes wasUncontended = false when its CAS on the
            // current thread's cell failed, i.e. other threads are competing
            else if (!wasUncontended)       // CAS already known to fail
                // Reset the flag; the probe is recomputed, then loop again
                wasUncontended = true;      // Continue after rehash
            // Try to count; on success, exit the loop
            else if (U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))
                break;
            // Update failed: check whether the array was already resized
            // or has reached its maximum size (the CPU count)
            else if (counterCells != as || n >= NCPU)
                // Clear the collision flag to avoid the resize branch below;
                // the probe will be recomputed
                collide = false;            // At max size or stale
            // Set the collision flag and rerun the loop.
            // If this branch is reached again with the flag still true,
            // it is skipped and the next branch performs the resize
            else if (!collide)
                collide = true;
            // Try to lock, then resize the counterCells array
            else if (cellsBusy == 0 &&
                     U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
                try {
                    // Check it has not been resized already
                    if (counterCells == as) {// Expand table unless stale
                        // The new array is twice the previous capacity
                        CounterCell[] rs = new CounterCell[n << 1];
                        // Migrate the data to the new array
                        for (int i = 0; i < n; ++i)
                            rs[i] = as[i];
                        counterCells = rs;
                    }
                } finally {
                    // Release the lock
                    cellsBusy = 0;
                }
                collide = false;
                // Rerun the loop
                continue;                   // Retry with expanded table
            }
            // Recompute the probe for the current thread
            h = ThreadLocalRandom.advanceProbe(h);
        }
        // The array is uninitialized: try the spin lock, then initialize it
        else if (cellsBusy == 0 && counterCells == as &&
                 U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
            boolean init = false;
            try {                           // Initialize table
                if (counterCells == as) {
                    // The CounterCell array's initial capacity is 2
                    CounterCell[] rs = new CounterCell[2];
                    // Initialize one CounterCell with the increment
                    rs[h & 1] = new CounterCell(x);
                    counterCells = rs;
                    init = true;
                }
            } finally {
                cellsBusy = 0;
            }
            // Array initialized successfully; exit the loop
            if (init)
                break;
        }
        // If the spin lock is taken, fall back to updating baseCount
        else if (U.compareAndSwapLong(this, BASECOUNT, v = baseCount, v + x))
            break;                          // Fall back on using base
    }
}

     Computing the total is straightforward once the CounterCell idea is clear: every count update lands either on baseCount or on some CounterCell, so to get the total, just add them all up.

public int size() {
    long n = sumCount();
    return ((n < 0L) ? 0 :
            (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
            (int)n);
}

final long sumCount() {
    CounterCell[] as = counterCells; CounterCell a;
    long sum = baseCount;
    if (as != null) {
        for (int i = 0; i < as.length; ++i) {
            if ((a = as[i]) != null)
                sum += a.value;
        }
    }
    return sum;
}

     In fact the total returned by size() may not be perfectly accurate: consider what happens if a CounterCell that has already been traversed is updated again before the sum finishes. Still, even as an estimate it is acceptable in most scenarios, and the performance is far better than in Java 7.
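For the same reason, Java 8 also added mappingCount(), which returns a long and avoids the int clamp in size(); both are best-effort estimates while writers are active. A small usage sketch:

```java
import java.util.concurrent.ConcurrentHashMap;

public class SizeDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        // Both walk baseCount + counterCells via sumCount(); with no
        // concurrent writers the result here is exact.
        System.out.println(map.size());          // 2
        System.out.println(map.mappingCount());  // 2
    }
}
```

The Javadoc itself recommends mappingCount() over size() for ConcurrentHashMap, since the true count can exceed Integer.MAX_VALUE.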

 3.5、Other operations

    The core logic of adding an element is much the same as in HashMap, so overall putVal() is fairly simple. The one point worth noting is that node manipulation is protected by a mutex to guarantee thread safety, and this mutex is very fine-grained: only the bucket being operated on is locked.

public V put(K key, V value) {
    return putVal(key, value, false);
}

/** Implementation for put and putIfAbsent */
final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());
    int binCount = 0; // node counter, used to decide on treeification
    // Infinite loop + CAS, the standard lock-free pattern
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        // Initialize the table
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        // Bucket is null: create the head node via CAS; on success, done
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            if (casTabAt(tab, i, null,
                         new Node<K,V>(hash, key, value, null)))
                break;                   // no lock when adding to empty bin
        }
        // Bucket holds a ForwardingNode:
        // the current thread goes to help with the resize
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null;
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    // The node is a linked list
                    if (fh >= 0) {
                        binCount = 1;
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            // Target found: set the value
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            // Not found: append a new node at the tail
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key,
                                                          value, null);
                                break;
                            }
                        }
                    }
                    // The node is a red-black tree
                    else if (f instanceof TreeBin) {
                        Node<K,V> p;
                        binCount = 2;
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                       value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            // Decide from the bucket's node count whether to treeify
            if (binCount != 0) {
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                // A non-null oldVal means no new node was added,
                // so return directly without updating the count
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    // Update the count
    addCount(1L, binCount);
    return null;
}
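The put family above can be exercised from user code. A small sketch of the onlyIfAbsent path and of merge(), which likewise takes the per-bucket lock while running its function:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("k", 1);                  // plain put: would overwrite
        map.putIfAbsent("k", 99);         // onlyIfAbsent = true: no overwrite
        System.out.println(map.get("k")); // 1
        // merge() runs the remapping function under the bucket's lock,
        // so the read-modify-write is atomic
        map.merge("k", 10, Integer::sum);
        System.out.println(map.get("k")); // 11
    }
}
```

For counters and caches, merge()/compute() are usually preferable to a get-then-put sequence, which is not atomic even on a ConcurrentHashMap.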

    Deletion lives in replaceNode(Object key, V value, Object cv): when table[key].val equals the expected value cv (or cv is null), the node's value is updated to value, and if value is null, the node is deleted instead.
    remove() achieves deletion by calling replaceNode(key, null, null). The implementation of replaceNode() differs little from putVal(), apart from how the linked list is manipulated.
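The cv ("compare value") parameter surfaces in the public API as the conditional forms of remove() and replace(). A short sketch:

```java
import java.util.concurrent.ConcurrentHashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("k", 1);
        // remove(key, cv): only removes when the current value equals cv
        System.out.println(map.remove("k", 2));      // false, value is 1
        // replace(key, cv, value): conditional update via replaceNode
        System.out.println(map.replace("k", 1, 5));  // true
        System.out.println(map.remove("k", 5));      // true
        System.out.println(map.containsKey("k"));    // false
    }
}
```

These conditional forms give compare-and-swap semantics per key, which is what makes lock-free retry loops over a ConcurrentHashMap possible.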

四、An Introduction to Hashtable

1 Like HashMap, Hashtable is a hash table that stores key-value mappings.
2 Hashtable extends Dictionary and implements the Map, Cloneable, and java.io.Serializable interfaces.
3 All of Hashtable's methods are synchronized, which makes it thread-safe. Neither its keys nor its values may be null.
4 Also, the mappings in a Hashtable are unordered.
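The null-handling difference in point 3 is easy to demonstrate. HashMap stores a null key in bucket 0 and allows null values; Hashtable rejects both (put() explicitly checks the value, and key.hashCode() throws for a null key):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "v");      // ok: HashMap allows a null key
        hashMap.put("k", null);      // ok: and null values

        Map<String, String> table = new Hashtable<>();
        try {
            table.put("k", null);    // Hashtable.put() checks value == null
        } catch (NullPointerException e) {
            System.out.println("Hashtable: null value rejected");
        }
        try {
            table.put(null, "v");    // null.hashCode() throws
        } catch (NullPointerException e) {
            System.out.println("Hashtable: null key rejected");
        }
    }
}
```

ConcurrentHashMap behaves like Hashtable here: it also rejects null keys and values, because with concurrent readers a null return would be ambiguous.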

    An instance of Hashtable has two parameters that affect its performance: the initial capacity and the load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the table is created. Note that the hash table is open: in the case of a hash collision, a single bucket stores multiple entries, which must be searched sequentially. The load factor is a measure of how full the table may get before its capacity is automatically increased. The initial capacity and load factor are merely hints to the implementation; exactly when and whether rehash is invoked is implementation-dependent. The default load factor of 0.75 is, as usual, a trade-off between time and space cost.
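The threshold arithmetic can be checked by hand. Using the defaults visible in the source below (initial capacity 11, load factor 0.75, resize to 2*n + 1):

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 11;
        float loadFactor = 0.75f;
        // threshold = capacity * loadFactor, truncated to int
        int threshold = (int) (capacity * loadFactor);
        System.out.println(threshold);       // 8
        // rehash() grows to 2 * oldCapacity + 1
        int newCapacity = capacity * 2 + 1;
        System.out.println(newCapacity);     // 23
    }
}
```

So the default Hashtable rehashes from 11 to 23 buckets once the element count reaches 8, whereas HashMap doubles from a power-of-two capacity of 16 at threshold 12.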

  1 package java.util;
  2 import java.io.*;
  3 
  4 public class Hashtable<K,V>
  5     extends Dictionary<K,V>
  6     implements Map<K,V>, Cloneable, java.io.Serializable {
  7 
  8     // Hashtable保存key-value的数组。
  9     // Hashtable是采用拉链法实现的,每一个Entry本质上是一个单向链表
 10     private transient Entry[] table;
 11 
 12     // Hashtable中元素的实际数量
 13     private transient int count;
 14 
 15     // 阈值,用于判断是否需要调整Hashtable的容量(threshold = 容量*加载因子)
 16     private int threshold;
 17 
 18     // 加载因子
 19     private float loadFactor;
 20 
 21     // Hashtable被改变的次数
 22     private transient int modCount = 0;
 23 
 24     // 序列版本号
 25     private static final long serialVersionUID = 1421746759512286392L;
 26 
 27     // 指定“容量大小”和“加载因子”的构造函数
 28     public Hashtable(int initialCapacity, float loadFactor) {
 29         if (initialCapacity < 0)
 30             throw new IllegalArgumentException("Illegal Capacity: "+
 31                                                initialCapacity);
 32         if (loadFactor <= 0 || Float.isNaN(loadFactor))
 33             throw new IllegalArgumentException("Illegal Load: "+loadFactor);
 34 
 35         if (initialCapacity==0)
 36             initialCapacity = 1;
 37         this.loadFactor = loadFactor;
 38         table = new Entry[initialCapacity];
 39         threshold = (int)(initialCapacity * loadFactor);
 40     }
 41 
 42     // 指定“容量大小”的构造函数
 43     public Hashtable(int initialCapacity) {
 44         this(initialCapacity, 0.75f);
 45     }
 46 
 47     // 默认构造函数。
 48     public Hashtable() {
 49         // 默认构造函数,指定的容量大小是11;加载因子是0.75
 50         this(11, 0.75f);
 51     }
 52 
 53     // 包含“子Map”的构造函数
 54     public Hashtable(Map<? extends K, ? extends V> t) {
 55         this(Math.max(2*t.size(), 11), 0.75f);
 56         // 将“子Map”的全部元素都添加到Hashtable中
 57         putAll(t);
 58     }
 59 
 60     public synchronized int size() {
 61         return count;
 62     }
 63 
 64     public synchronized boolean isEmpty() {
 65         return count == 0;
 66     }
 67 
 68     // 返回“所有key”的枚举对象
 69     public synchronized Enumeration<K> keys() {
 70         return this.<K>getEnumeration(KEYS);
 71     }
 72 
 73     // 返回“所有value”的枚举对象
 74     public synchronized Enumeration<V> elements() {
 75         return this.<V>getEnumeration(VALUES);
 76     }
 77 
 78     // 判断Hashtable是否包含“值(value)”
 79     public synchronized boolean contains(Object value) {
 80         // Hashtable中“键值对”的value不能是null,
 81         // 若是null的话,抛出异常!
 82         if (value == null) {
 83             throw new NullPointerException();
 84         }
 85 
 86         // 从后向前遍历table数组中的元素(Entry)
 87         // 对于每个Entry(单向链表),逐个遍历,判断节点的值是否等于value
 88         Entry tab[] = table;
 89         for (int i = tab.length ; i-- > 0 ;) {
 90             for (Entry<K,V> e = tab[i] ; e != null ; e = e.next) {
 91                 if (e.value.equals(value)) {
 92                     return true;
 93                 }
 94             }
 95         }
 96         return false;
 97     }
 98 
 99     public boolean containsValue(Object value) {
100         return contains(value);
101     }
102 
103     // 判断Hashtable是否包含key
104     public synchronized boolean containsKey(Object key) {
105         Entry tab[] = table;
106         int hash = key.hashCode();
107         // 计算索引值:hash & 0x7FFFFFFF 保证哈希值非负,
108         // % tab.length 再将其映射到数组下标范围内
109         int index = (hash & 0x7FFFFFFF) % tab.length;
110         // 找到“key对应的Entry(链表)”,然后在链表中找出“哈希值”和“键值”与key都相等的元素
111         for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
112             if ((e.hash == hash) && e.key.equals(key)) {
113                 return true;
114             }
115         }
116         return false;
117     }
118 
119     // 返回key对应的value,没有的话返回null
120     public synchronized V get(Object key) {
121         Entry tab[] = table;
122         int hash = key.hashCode();
123         // 计算索引值,
124         int index = (hash & 0x7FFFFFFF) % tab.length;
125         // 找到“key对应的Entry(链表)”,然后在链表中找出“哈希值”和“键值”与key都相等的元素
126         for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
127             if ((e.hash == hash) && e.key.equals(key)) {
128                 return e.value;
129             }
130         }
131         return null;
132     }
133 
134     // 调整Hashtable的长度,将长度变成原来的(2倍+1)
135     // (01) 将“旧的Entry数组”赋值给一个临时变量。
136     // (02) 创建一个“新的Entry数组”,并赋值给“旧的Entry数组”
137     // (03) 将“Hashtable”中的全部元素依次添加到“新的Entry数组”中
138     protected void rehash() {
139         int oldCapacity = table.length;
140         Entry[] oldMap = table;
141 
142         int newCapacity = oldCapacity * 2 + 1;
143         Entry[] newMap = new Entry[newCapacity];
144 
145         modCount++;
146         threshold = (int)(newCapacity * loadFactor);
147         table = newMap;
148 
149         for (int i = oldCapacity ; i-- > 0 ;) {
150             for (Entry<K,V> old = oldMap[i] ; old != null ; ) {
151                 Entry<K,V> e = old;
152                 old = old.next;
153 
154                 int index = (e.hash & 0x7FFFFFFF) % newCapacity;
155                 e.next = newMap[index];
156                 newMap[index] = e;
157             }
158         }
159     }
160 
161     // 将“key-value”添加到Hashtable中
162     public synchronized V put(K key, V value) {
163         // Hashtable中不能插入value为null的元素!!!
164         if (value == null) {
165             throw new NullPointerException();
166         }
167 
168         // 若“Hashtable中已存在键为key的键值对”,
169         // 则用“新的value”替换“旧的value”
170         Entry tab[] = table;
171         int hash = key.hashCode();
172         int index = (hash & 0x7FFFFFFF) % tab.length;
173         for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
174             if ((e.hash == hash) && e.key.equals(key)) {
175                 V old = e.value;
176                 e.value = value;
177                 return old;
178                 }
179         }
180 
181         // 若“Hashtable中不存在键为key的键值对”,
182         // (01) 将“修改统计数”+1
183         modCount++;
184         // (02) 若“Hashtable实际容量” >= “阈值”(阈值=总的容量 * 加载因子)
185         //  则调整Hashtable的大小
186         if (count >= threshold) {
187             // Rehash the table if the threshold is exceeded
188             rehash();
189 
190             tab = table;
191             index = (hash & 0x7FFFFFFF) % tab.length;
192         }
193 
194         // (03) 将“Hashtable中index”位置的Entry(链表)保存到e中
195         Entry<K,V> e = tab[index];
196         // (04) 创建“新的Entry节点”,并将“新的Entry”插入“Hashtable的index位置”,并设置e为“新的Entry”的下一个元素(即“新Entry”为链表表头)。        
197         tab[index] = new Entry<K,V>(hash, key, value, e);
198         // (05) 将“Hashtable的实际容量”+1
199         count++;
200         return null;
201     }
202 
203     // 删除Hashtable中键为key的元素
204     public synchronized V remove(Object key) {
205         Entry tab[] = table;
206         int hash = key.hashCode();
207         int index = (hash & 0x7FFFFFFF) % tab.length;
208         // 找到“key对应的Entry(链表)”
209         // 然后在链表中找出要删除的节点,并删除该节点。
210         for (Entry<K,V> e = tab[index], prev = null ; e != null ; prev = e, e = e.next) {
211             if ((e.hash == hash) && e.key.equals(key)) {
212                 modCount++;
213                 if (prev != null) {
214                     prev.next = e.next;
215                 } else {
216                     tab[index] = e.next;
217                 }
218                 count--;
219                 V oldValue = e.value;
220                 e.value = null;
221                 return oldValue;
222             }
223         }
224         return null;
225     }
226 
227     // 将“Map(t)”的中全部元素逐一添加到Hashtable中
228     public synchronized void putAll(Map<? extends K, ? extends V> t) {
229         for (Map.Entry<? extends K, ? extends V> e : t.entrySet())
230             put(e.getKey(), e.getValue());
231     }
232 
233     // 清空Hashtable
234     // 将Hashtable的table数组的值全部设为null
235     public synchronized void clear() {
236         Entry tab[] = table;
237         modCount++;
238         for (int index = tab.length; --index >= 0; )
239             tab[index] = null;
240         count = 0;
241     }
242 
243     // 克隆一个Hashtable,并以Object的形式返回。
244     public synchronized Object clone() {
245         try {
246             Hashtable<K,V> t = (Hashtable<K,V>) super.clone();
247             t.table = new Entry[table.length];
248             for (int i = table.length ; i-- > 0 ; ) {
249                 t.table[i] = (table[i] != null)
250                 ? (Entry<K,V>) table[i].clone() : null;
251             }
252             t.keySet = null;
253             t.entrySet = null;
254             t.values = null;
255             t.modCount = 0;
256             return t;
257         } catch (CloneNotSupportedException e) {
258             // this shouldn't happen, since we are Cloneable
259             throw new InternalError();
260         }
261     }
262 
263     public synchronized String toString() {
264         int max = size() - 1;
265         if (max == -1)
266             return "{}";
267 
268         StringBuilder sb = new StringBuilder();
269         Iterator<Map.Entry<K,V>> it = entrySet().iterator();
270 
271         sb.append('{');
272         for (int i = 0; ; i++) {
273             Map.Entry<K,V> e = it.next();
274             K key = e.getKey();
275             V value = e.getValue();
276             sb.append(key   == this ? "(this Map)" : key.toString());
277             sb.append('=');
278             sb.append(value == this ? "(this Map)" : value.toString());
279 
280             if (i == max)
281                 return sb.append('}').toString();
282             sb.append(", ");
283         }
284     }
285 
286     // 获取Hashtable的枚举类对象
287     // 若Hashtable的实际大小为0,则返回“空枚举类”对象;
288     // 否则,返回正常的Enumerator的对象。(Enumerator实现了迭代器和枚举两个接口)
289     private <T> Enumeration<T> getEnumeration(int type) {
290     if (count == 0) {
291         return (Enumeration<T>)emptyEnumerator;
292     } else {
293         return new Enumerator<T>(type, false);
294     }
295     }
296 
297     // 获取Hashtable的迭代器
298     // 若Hashtable的实际大小为0,则返回“空迭代器”对象;
299     // 否则,返回正常的Enumerator的对象。(Enumerator实现了迭代器和枚举两个接口)
300     private <T> Iterator<T> getIterator(int type) {
301         if (count == 0) {
302             return (Iterator<T>) emptyIterator;
303         } else {
304             return new Enumerator<T>(type, true);
305         }
306     }
307 
308     // Hashtable的“key的集合”。它是一个Set,意味着没有重复元素
309     private transient volatile Set<K> keySet = null;
310     // Hashtable的“key-value的集合”。它是一个Set,意味着没有重复元素
311     private transient volatile Set<Map.Entry<K,V>> entrySet = null;
312     // Hashtable的“key-value的集合”。它是一个Collection,意味着可以有重复元素
313     private transient volatile Collection<V> values = null;
314 
315     // 返回一个被synchronizedSet封装后的KeySet对象
316     // synchronizedSet封装的目的是对KeySet的所有方法都添加synchronized,实现多线程同步
317     public Set<K> keySet() {
318         if (keySet == null)
319             keySet = Collections.synchronizedSet(new KeySet(), this);
320         return keySet;
321     }
322 
323     // Hashtable的Key的Set集合。
324     // KeySet继承于AbstractSet,所以,KeySet中的元素没有重复的。
325     private class KeySet extends AbstractSet<K> {
326         public Iterator<K> iterator() {
327             return getIterator(KEYS);
328         }
329         public int size() {
330             return count;
331         }
332         public boolean contains(Object o) {
333             return containsKey(o);
334         }
335         public boolean remove(Object o) {
336             return Hashtable.this.remove(o) != null;
337         }
338         public void clear() {
339             Hashtable.this.clear();
340         }
341     }
342 
343     // 返回一个被synchronizedSet封装后的EntrySet对象
344     // synchronizedSet封装的目的是对EntrySet的所有方法都添加synchronized,实现多线程同步
345     public Set<Map.Entry<K,V>> entrySet() {
346         if (entrySet==null)
347             entrySet = Collections.synchronizedSet(new EntrySet(), this);
348         return entrySet;
349     }
350 
351     // Hashtable的Entry的Set集合。
352     // EntrySet继承于AbstractSet,所以,EntrySet中的元素没有重复的。
353     private class EntrySet extends AbstractSet<Map.Entry<K,V>> {
354         public Iterator<Map.Entry<K,V>> iterator() {
355             return getIterator(ENTRIES);
356         }
357 
358         public boolean add(Map.Entry<K,V> o) {
359             return super.add(o);
360         }
361 
362         // 查找EntrySet中是否包含Object(0)
363         // 首先,在table中找到o对应的Entry(Entry是一个单向链表)
364         // 然后,查找Entry链表中是否存在Object
365         public boolean contains(Object o) {
366             if (!(o instanceof Map.Entry))
367                 return false;
368             Map.Entry entry = (Map.Entry)o;
369             Object key = entry.getKey();
370             Entry[] tab = table;
371             int hash = key.hashCode();
372             int index = (hash & 0x7FFFFFFF) % tab.length;
373 
374             for (Entry e = tab[index]; e != null; e = e.next)
375                 if (e.hash==hash && e.equals(entry))
376                     return true;
377             return false;
378         }
379 
380         // 删除元素Object(0)
381         // 首先,在table中找到o对应的Entry(Entry是一个单向链表)
382         // 然后,删除链表中的元素Object
383         public boolean remove(Object o) {
384             if (!(o instanceof Map.Entry))
385                 return false;
386             Map.Entry<K,V> entry = (Map.Entry<K,V>) o;
387             K key = entry.getKey();
388             Entry[] tab = table;
389             int hash = key.hashCode();
390             int index = (hash & 0x7FFFFFFF) % tab.length;
391 
392             for (Entry<K,V> e = tab[index], prev = null; e != null;
393                  prev = e, e = e.next) {
394                 if (e.hash==hash && e.equals(entry)) {
395                     modCount++;
396                     if (prev != null)
397                         prev.next = e.next;
398                     else
399                         tab[index] = e.next;
400 
401                     count--;
402                     e.value = null;
403                     return true;
404                 }
405             }
406             return false;
407         }
408 
409         public int size() {
410             return count;
411         }
412 
413         public void clear() {
414             Hashtable.this.clear();
415         }
416     }
417 
418     // 返回一个被synchronizedCollection封装后的ValueCollection对象
419     // synchronizedCollection封装的目的是对ValueCollection的所有方法都添加synchronized,实现多线程同步
420     public Collection<V> values() {
421     if (values==null)
422         values = Collections.synchronizedCollection(new ValueCollection(),
423                                                         this);
424         return values;
425     }
426 
427     // Hashtable的value的Collection集合。
428     // ValueCollection继承于AbstractCollection,所以,ValueCollection中的元素可以重复的。
429     private class ValueCollection extends AbstractCollection<V> {
430         public Iterator<V> iterator() {
431         return getIterator(VALUES);
432         }
433         public int size() {
434             return count;
435         }
436         public boolean contains(Object o) {
437             return containsValue(o);
438         }
439         public void clear() {
440             Hashtable.this.clear();
441         }
442     }
443 
444     // 重写equals()方法
445     // 若两个Hashtable的所有key-value键值对都相等,则判断它们两个相等
446     public synchronized boolean equals(Object o) {
447         if (o == this)
448             return true;
449 
450         if (!(o instanceof Map))
451             return false;
452         Map<K,V> t = (Map<K,V>) o;
453         if (t.size() != size())
454             return false;
455 
456         try {
457             // 通过迭代器依次取出当前Hashtable的key-value键值对
458             // 并判断该键值对,存在于Hashtable(o)中。
459             // 若不存在,则立即返回false;否则,遍历完“当前Hashtable”并返回true。
460             Iterator<Map.Entry<K,V>> i = entrySet().iterator();
461             while (i.hasNext()) {
462                 Map.Entry<K,V> e = i.next();
463                 K key = e.getKey();
464                 V value = e.getValue();
465                 if (value == null) {
466                     if (!(t.get(key)==null && t.containsKey(key)))
467                         return false;
468                 } else {
469                     if (!value.equals(t.get(key)))
470                         return false;
471                 }
472             }
473         } catch (ClassCastException unused)   {
474             return false;
475         } catch (NullPointerException unused) {
476             return false;
477         }
478 
479         return true;
480     }
481 
482     // 计算Hashtable的哈希值
483     // 若 Hashtable的实际大小为0 或者 加载因子<0,则返回0。
484     // 否则,返回“Hashtable中的每个Entry的key和value的异或值 的总和”。
485     public synchronized int hashCode() {
486         int h = 0;
487         if (count == 0 || loadFactor < 0)
488             return h;  // Returns zero
489 
490         loadFactor = -loadFactor;  // Mark hashCode computation in progress
491         Entry[] tab = table;
492         for (int i = 0; i < tab.length; i++)
493             for (Entry e = tab[i]; e != null; e = e.next)
494                 h += e.key.hashCode() ^ e.value.hashCode();
495         loadFactor = -loadFactor;  // Mark hashCode computation complete
496 
497         return h;
498     }
499 
500     // java.io.Serializable的写入函数
501     // 将Hashtable的“总的容量,实际容量,所有的Entry”都写入到输出流中
502     private synchronized void writeObject(java.io.ObjectOutputStream s)
503         throws IOException
504     {
505         // Write out the length, threshold, loadfactor
506         s.defaultWriteObject();
507 
508         // Write out length, count of elements and then the key/value objects
509         s.writeInt(table.length);
510         s.writeInt(count);
511         for (int index = table.length-1; index >= 0; index--) {
512             Entry entry = table[index];
513 
514             while (entry != null) {
515             s.writeObject(entry.key);
516             s.writeObject(entry.value);
517             entry = entry.next;
518             }
519         }
520     }
521 
522     // java.io.Serializable的读取函数:根据写入方式读出
523     // 将Hashtable的“总的容量,实际容量,所有的Entry”依次读出
524     private void readObject(java.io.ObjectInputStream s)
525          throws IOException, ClassNotFoundException
526     {
527         // Read in the length, threshold, and loadfactor
528         s.defaultReadObject();
529 
530         // Read the original length of the array and number of elements
531         int origlength = s.readInt();
532         int elements = s.readInt();
533 
534         // Compute new size with a bit of room 5% to grow but
535         // no larger than the original size.  Make the length
536         // odd if it's large enough, this helps distribute the entries.
537         // Guard against the length ending up zero, that's not valid.
538         int length = (int)(elements * loadFactor) + (elements / 20) + 3;
539         if (length > elements && (length & 1) == 0)
540             length--;
541         if (origlength > 0 && length > origlength)
542             length = origlength;
543 
544         Entry[] table = new Entry[length];
545         count = 0;
546 
547         // Read the number of elements and then all the key/value objects
548         for (; elements > 0; elements--) {
549             K key = (K)s.readObject();
550             V value = (V)s.readObject();
551                 // synch could be eliminated for performance
552                 reconstitutionPut(table, key, value);
553         }
554         this.table = table;
555     }
556 
557     private void reconstitutionPut(Entry[] tab, K key, V value)
558         throws StreamCorruptedException
559     {
560         if (value == null) {
561             throw new java.io.StreamCorruptedException();
562         }
563         // Makes sure the key is not already in the hashtable.
564         // This should not happen in deserialized version.
565         int hash = key.hashCode();
566         int index = (hash & 0x7FFFFFFF) % tab.length;
567         for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
568             if ((e.hash == hash) && e.key.equals(key)) {
569                 throw new java.io.StreamCorruptedException();
570             }
571         }
572         // Creates the new entry.
573         Entry<K,V> e = tab[index];
574         tab[index] = new Entry<K,V>(hash, key, value, e);
575         count++;
576     }
577 
578     // Hashtable的Entry节点,本质上是单向链表的节点。
579     // 由此可见,Hashtable是用“拉链法”解决冲突的散列表。
580     private static class Entry<K,V> implements Map.Entry<K,V> {
581         // 哈希值
582         int hash;
583         K key;
584         V value;
585         // 指向的下一个Entry,即链表的下一个节点
586         Entry<K,V> next;
587 
588         // 构造函数
589         protected Entry(int hash, K key, V value, Entry<K,V> next) {
590             this.hash = hash;
591             this.key = key;
592             this.value = value;
593             this.next = next;
594         }
595 
596         protected Object clone() {
597             return new Entry<K,V>(hash, key, value,
598                   (next==null ? null : (Entry<K,V>) next.clone()));
599         }
600 
601         public K getKey() {
602             return key;
603         }
604 
605         public V getValue() {
606             return value;
607         }
608 
609         // 设置value。若value是null,则抛出异常。
610         public V setValue(V value) {
611             if (value == null)
612                 throw new NullPointerException();
613 
614             V oldValue = this.value;
615             this.value = value;
616             return oldValue;
617         }
618 
619         // 覆盖equals()方法,判断两个Entry是否相等。
620         // 若两个Entry的key和value都相等,则认为它们相等。
621         public boolean equals(Object o) {
622             if (!(o instanceof Map.Entry))
623                 return false;
624             Map.Entry e = (Map.Entry)o;
625 
626             return (key==null ? e.getKey()==null : key.equals(e.getKey())) &&
627                (value==null ? e.getValue()==null : value.equals(e.getValue()));
628         }
629 
630         public int hashCode() {
631             return hash ^ (value==null ? 0 : value.hashCode());
632         }
633 
634         public String toString() {
635             return key.toString()+"="+value.toString();
636         }
637     }
638 
639     private static final int KEYS = 0;
640     private static final int VALUES = 1;
641     private static final int ENTRIES = 2;
642 
643     // Enumerator同时实现了Enumeration和Iterator两个接口,因此既支持通过keys()/elements()
644     // 的枚举方式遍历Hashtable,也支持通过keySet()/entrySet()等集合视图的迭代器方式遍历。
644     private class Enumerator<T> implements Enumeration<T>, Iterator<T> {
645         // 指向Hashtable的table数组
646         Entry[] table = Hashtable.this.table;
647         // 当前遍历到的桶下标(从table数组末尾开始向前遍历)
648         int index = table.length;
649         Entry<K,V> entry = null;
650         Entry<K,V> lastReturned = null;
651         int type;
652 
653         // Enumerator是 “迭代器(Iterator)” 还是 “枚举类(Enumeration)”的标志
654         // iterator为true,表示它是迭代器;否则,是枚举类。
655         boolean iterator;
656 
657         // 在将Enumerator当作迭代器使用时会用到,用来实现fail-fast机制。
658         protected int expectedModCount = modCount;
659 
660         Enumerator(int type, boolean iterator) {
661             this.type = type;
662             this.iterator = iterator;
663         }
664 
665         // 从table数组的末尾开始向前查找,直到找到不为null的Entry为止。
666         public boolean hasMoreElements() {
667             Entry<K,V> e = entry;
668             int i = index;
669             Entry[] t = table;
670             /* Use locals for faster loop iteration */
671             while (e == null && i > 0) {
672                 e = t[--i];
673             }
674             entry = e;
675             index = i;
676             return e != null;
677         }
678 
679         // 获取下一个元素
680         // 注意:从hasMoreElements()和nextElement()可以看出Hashtable的elements()遍历方式:
681         // 首先,从后向前遍历table数组,table数组的每个节点都是一个单向链表(Entry);
682         // 然后,依次向后遍历该单向链表。
683         public T nextElement() {
684             Entry<K,V> et = entry;
685             int i = index;
686             Entry[] t = table;
687             /* Use locals for faster loop iteration */
688             while (et == null && i > 0) {
689                 et = t[--i];
690             }
691             entry = et;
692             index = i;
693             if (et != null) {
694                 Entry<K,V> e = lastReturned = entry;
695                 entry = e.next;
696                 return type == KEYS ? (T)e.key : (type == VALUES ? (T)e.value : (T)e);
697             }
698             throw new NoSuchElementException("Hashtable Enumerator");
699         }
700 
701         // Iterator接口的hasNext():判断是否存在下一个元素,
702         // 实际上直接调用hasMoreElements()
703         public boolean hasNext() {
704             return hasMoreElements();
705         }
706 
707         // Iterator接口的next():先做fail-fast检查(modCount是否被改变),
708         // 再调用nextElement()获取下一个元素
709         public T next() {
710             if (modCount != expectedModCount)
711                 throw new ConcurrentModificationException();
712             return nextElement();
713         }
714 
715         // 迭代器的remove()接口。
716         // 首先,它在table数组中找出要删除元素所在的Entry,
717         // 然后,删除单向链表Entry中的元素。
718         public void remove() {
719             if (!iterator)
720                 throw new UnsupportedOperationException();
721             if (lastReturned == null)
722                 throw new IllegalStateException("Hashtable Enumerator");
723             if (modCount != expectedModCount)
724                 throw new ConcurrentModificationException();
725 
726             synchronized(Hashtable.this) {
727                 Entry[] tab = Hashtable.this.table;
728                 int index = (lastReturned.hash & 0x7FFFFFFF) % tab.length;
729 
730                 for (Entry<K,V> e = tab[index], prev = null; e != null;
731                      prev = e, e = e.next) {
732                     if (e == lastReturned) {
733                         modCount++;
734                         expectedModCount++;
735                         if (prev == null)
736                             tab[index] = e.next;
737                         else
738                             prev.next = e.next;
739                         count--;
740                         lastReturned = null;
741                         return;
742                     }
743                 }
744                 throw new ConcurrentModificationException();
745             }
746         }
747     }
748 
749 
750     private static Enumeration emptyEnumerator = new EmptyEnumerator();
751     private static Iterator emptyIterator = new EmptyIterator();
752 
753     // 空枚举类
754     // 当Hashtable为空时,通过Enumeration遍历会返回这个“空枚举类”的对象。
755     private static class EmptyEnumerator implements Enumeration<Object> {
756 
757         EmptyEnumerator() {
758         }
759 
760         // 空枚举类的hasMoreElements() 始终返回false
761         public boolean hasMoreElements() {
762             return false;
763         }
764 
765         // 空枚举类的nextElement() 抛出异常
766         public Object nextElement() {
767             throw new NoSuchElementException("Hashtable Enumerator");
768         }
769     }
770 
771 
772     // 空迭代器
773     // 当Hashtable为空时,通过迭代器遍历会返回这个“空迭代器”的对象。
774     private static class EmptyIterator implements Iterator<Object> {
775 
776         EmptyIterator() {
777         }
778 
779         public boolean hasNext() {
780             return false;
781         }
782 
783         public Object next() {
784             throw new NoSuchElementException("Hashtable Iterator");
785         }
786 
787         public void remove() {
788             throw new IllegalStateException("Hashtable Iterator");
789         }
790 
791     }
792 }
jdk1.6的Hashtable源码解析
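结合上面源码中的null检查、modCount以及Enumerator的实现,下面给出一个可运行的小例子(类名HashtableDemo及各方法名为示例自拟),验证三点:Hashtable不允许null的value(put时抛NullPointerException);集合视图的迭代器是fail-fast的(迭代期间绕过迭代器做结构性修改,下一次next()会抛出ConcurrentModificationException);而keys()返回的Enumeration不检查modCount,不是fail-fast的:

```java
import java.util.ConcurrentModificationException;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Iterator;

public class HashtableDemo {

    static Hashtable<String, Integer> sample() {
        Hashtable<String, Integer> t = new Hashtable<>();
        t.put("one", 1);
        t.put("two", 2);
        t.put("three", 3);
        return t;
    }

    // key和value都不允许为null:put(key, null)抛出NullPointerException
    static boolean nullValueRejected() {
        try {
            sample().put("four", null);
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    // 集合视图的迭代器是fail-fast的:迭代期间绕过迭代器做结构性修改,
    // 下一次next()发现modCount != expectedModCount,抛出ConcurrentModificationException
    static boolean iteratorIsFailFast() {
        Hashtable<String, Integer> t = sample();
        Iterator<String> it = t.keySet().iterator();
        it.next();
        t.put("four", 4); // 结构性修改,modCount++
        try {
            it.next();
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    // keys()返回的Enumeration不做modCount检查,不是fail-fast的
    static boolean enumerationIsFailFast() {
        Hashtable<String, Integer> t = sample();
        Enumeration<String> keys = t.keys();
        keys.nextElement();
        t.put("four", 4); // 同样的结构性修改
        try {
            keys.nextElement(); // 不抛异常,继续遍历(遍历结果是未定义的)
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("null value被拒绝: " + nullValueRejected());
        System.out.println("迭代器fail-fast: " + iteratorIsFailFast());
        System.out.println("Enumeration fail-fast: " + enumerationIsFailFast());
    }
}
```

这也印证了Javadoc的说法:由iterator()方法返回的迭代器是fail-fast的,而keys()和elements()返回的Enumeration不是;fail-fast机制只能用于检测bug,不能作为并发正确性的保证。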
   1 /*
   2  * Copyright (c) 1994, 2013, Oracle and/or its affiliates. All rights reserved.
   3  * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
   4  *
   5  *
   6  *
   7  *
   8  *
   9  *
  10  *
  11  *
  12  *
  13  *
  14  *
  15  *
  16  *
  17  *
  18  *
  19  *
  20  *
  21  *
  22  *
  23  *
  24  */
  25 
  26 package java.util;
  27 
  28 import java.io.*;
  29 import java.util.concurrent.ThreadLocalRandom;
  30 import java.util.function.BiConsumer;
  31 import java.util.function.Function;
  32 import java.util.function.BiFunction;
  33 
  34 /**
  35  * This class implements a hash table, which maps keys to values. Any
  36  * non-<code>null</code> object can be used as a key or as a value. <p>
  37  *
  38  * To successfully store and retrieve objects from a hashtable, the
  39  * objects used as keys must implement the <code>hashCode</code>
  40  * method and the <code>equals</code> method. <p>
  41  *
  42  * An instance of <code>Hashtable</code> has two parameters that affect its
  43  * performance: <i>initial capacity</i> and <i>load factor</i>.  The
  44  * <i>capacity</i> is the number of <i>buckets</i> in the hash table, and the
  45  * <i>initial capacity</i> is simply the capacity at the time the hash table
  46  * is created.  Note that the hash table is <i>open</i>: in the case of a "hash
  47  * collision", a single bucket stores multiple entries, which must be searched
  48  * sequentially.  The <i>load factor</i> is a measure of how full the hash
  49  * table is allowed to get before its capacity is automatically increased.
  50  * The initial capacity and load factor parameters are merely hints to
  51  * the implementation.  The exact details as to when and whether the rehash
  52  * method is invoked are implementation-dependent.<p>
  53  *
  54  * Generally, the default load factor (.75) offers a good tradeoff between
  55  * time and space costs.  Higher values decrease the space overhead but
  56  * increase the time cost to look up an entry (which is reflected in most
  57  * <tt>Hashtable</tt> operations, including <tt>get</tt> and <tt>put</tt>).<p>
  58  *
  59  * The initial capacity controls a tradeoff between wasted space and the
  60  * need for <code>rehash</code> operations, which are time-consuming.
  61  * No <code>rehash</code> operations will <i>ever</i> occur if the initial
  62  * capacity is greater than the maximum number of entries the
  63  * <tt>Hashtable</tt> will contain divided by its load factor.  However,
  64  * setting the initial capacity too high can waste space.<p>
  65  *
  66  * If many entries are to be made into a <code>Hashtable</code>,
  67  * creating it with a sufficiently large capacity may allow the
  68  * entries to be inserted more efficiently than letting it perform
  69  * automatic rehashing as needed to grow the table. <p>
  70  *
  71  * This example creates a hashtable of numbers. It uses the names of
  72  * the numbers as keys:
  73  * <pre>   {@code
  74  *   Hashtable<String, Integer> numbers
  75  *     = new Hashtable<String, Integer>();
  76  *   numbers.put("one", 1);
  77  *   numbers.put("two", 2);
  78  *   numbers.put("three", 3);}</pre>
  79  *
  80  * <p>To retrieve a number, use the following code:
  81  * <pre>   {@code
  82  *   Integer n = numbers.get("two");
  83  *   if (n != null) {
  84  *     System.out.println("two = " + n);
  85  *   }}</pre>
  86  *
  87  * <p>The iterators returned by the <tt>iterator</tt> method of the collections
  88  * returned by all of this class's "collection view methods" are
  89  * <em>fail-fast</em>: if the Hashtable is structurally modified at any time
  90  * after the iterator is created, in any way except through the iterator's own
  91  * <tt>remove</tt> method, the iterator will throw a {@link
  92  * ConcurrentModificationException}.  Thus, in the face of concurrent
  93  * modification, the iterator fails quickly and cleanly, rather than risking
  94  * arbitrary, non-deterministic behavior at an undetermined time in the future.
  95  * The Enumerations returned by Hashtable's keys and elements methods are
  96  * <em>not</em> fail-fast.
  97  *
  98  * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
  99  * as it is, generally speaking, impossible to make any hard guarantees in the
 100  * presence of unsynchronized concurrent modification.  Fail-fast iterators
 101  * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 102  * Therefore, it would be wrong to write a program that depended on this
 103  * exception for its correctness: <i>the fail-fast behavior of iterators
 104  * should be used only to detect bugs.</i>
 105  *
 106  * <p>As of the Java 2 platform v1.2, this class was retrofitted to
 107  * implement the {@link Map} interface, making it a member of the
 108  * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 109  *
 110  * Java Collections Framework</a>.  Unlike the new collection
 111  * implementations, {@code Hashtable} is synchronized.  If a
 112  * thread-safe implementation is not needed, it is recommended to use
 113  * {@link HashMap} in place of {@code Hashtable}.  If a thread-safe
 114  * highly-concurrent implementation is desired, then it is recommended
 115  * to use {@link java.util.concurrent.ConcurrentHashMap} in place of
 116  * {@code Hashtable}.
 117  *
 118  * @author  Arthur van Hoff
 119  * @author  Josh Bloch
 120  * @author  Neal Gafter
 121  * @see     Object#equals(java.lang.Object)
 122  * @see     Object#hashCode()
 123  * @see     Hashtable#rehash()
 124  * @see     Collection
 125  * @see     Map
 126  * @see     HashMap
 127  * @see     TreeMap
 128  * @since JDK1.0
 129  */
 130 public class Hashtable<K,V>
 131     extends Dictionary<K,V>
 132     implements Map<K,V>, Cloneable, java.io.Serializable {
 133 
 134     /**
 135      * The hash table data.
 136      */
 137     private transient Entry<?,?>[] table;
 138 
 139     /**
 140      * The total number of entries in the hash table.
 141      */
 142     private transient int count;
 143 
 144     /**
 145      * The table is rehashed when its size exceeds this threshold.  (The
 146      * value of this field is (int)(capacity * loadFactor).)
 147      *
 148      * @serial
 149      */
 150     private int threshold;
 151 
 152     /**
 153      * The load factor for the hashtable.
 154      *
 155      * @serial
 156      */
 157     private float loadFactor;
 158 
 159     /**
 160      * The number of times this Hashtable has been structurally modified
 161      * Structural modifications are those that change the number of entries in
 162      * the Hashtable or otherwise modify its internal structure (e.g.,
 163      * rehash).  This field is used to make iterators on Collection-views of
 164      * the Hashtable fail-fast.  (See ConcurrentModificationException).
 165      */
 166     private transient int modCount = 0;
 167 
 168     /** use serialVersionUID from JDK 1.0.2 for interoperability */
 169     private static final long serialVersionUID = 1421746759512286392L;
 170 
 171     /**
 172      * Constructs a new, empty hashtable with the specified initial
 173      * capacity and the specified load factor.
 174      *
 175      * @param      initialCapacity   the initial capacity of the hashtable.
 176      * @param      loadFactor        the load factor of the hashtable.
 177      * @exception  IllegalArgumentException  if the initial capacity is less
 178      *             than zero, or if the load factor is nonpositive.
 179      */
 180     public Hashtable(int initialCapacity, float loadFactor) {
 181         if (initialCapacity < 0)
 182             throw new IllegalArgumentException("Illegal Capacity: "+
 183                                                initialCapacity);
 184         if (loadFactor <= 0 || Float.isNaN(loadFactor))
 185             throw new IllegalArgumentException("Illegal Load: "+loadFactor);
 186 
 187         if (initialCapacity==0)
 188             initialCapacity = 1;
 189         this.loadFactor = loadFactor;
 190         table = new Entry<?,?>[initialCapacity];
 191         threshold = (int)Math.min(initialCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
 192     }
 193 
 194     /**
 195      * Constructs a new, empty hashtable with the specified initial capacity
 196      * and default load factor (0.75).
 197      *
 198      * @param     initialCapacity   the initial capacity of the hashtable.
 199      * @exception IllegalArgumentException if the initial capacity is less
 200      *              than zero.
 201      */
 202     public Hashtable(int initialCapacity) {
 203         this(initialCapacity, 0.75f);
 204     }
 205 
 206     /**
 207      * Constructs a new, empty hashtable with a default initial capacity (11)
 208      * and load factor (0.75).
 209      */
 210     public Hashtable() {
 211         this(11, 0.75f);
 212     }
 213 
 214     /**
 215      * Constructs a new hashtable with the same mappings as the given
 216      * Map.  The hashtable is created with an initial capacity sufficient to
 217      * hold the mappings in the given Map and a default load factor (0.75).
 218      *
 219      * @param t the map whose mappings are to be placed in this map.
 220      * @throws NullPointerException if the specified map is null.
 221      * @since   1.2
 222      */
 223     public Hashtable(Map<? extends K, ? extends V> t) {
 224         this(Math.max(2*t.size(), 11), 0.75f);
 225         putAll(t);
 226     }
 227 
 228     /**
 229      * Returns the number of keys in this hashtable.
 230      *
 231      * @return  the number of keys in this hashtable.
 232      */
 233     public synchronized int size() {
 234         return count;
 235     }
 236 
 237     /**
 238      * Tests if this hashtable maps no keys to values.
 239      *
 240      * @return  <code>true</code> if this hashtable maps no keys to values;
 241      *          <code>false</code> otherwise.
 242      */
 243     public synchronized boolean isEmpty() {
 244         return count == 0;
 245     }
 246 
 247     /**
 248      * Returns an enumeration of the keys in this hashtable.
 249      *
 250      * @return  an enumeration of the keys in this hashtable.
 251      * @see     Enumeration
 252      * @see     #elements()
 253      * @see     #keySet()
 254      * @see     Map
 255      */
 256     public synchronized Enumeration<K> keys() {
 257         return this.<K>getEnumeration(KEYS);
 258     }
 259 
 260     /**
 261      * Returns an enumeration of the values in this hashtable.
 262      * Use the Enumeration methods on the returned object to fetch the elements
 263      * sequentially.
 264      *
 265      * @return  an enumeration of the values in this hashtable.
 266      * @see     java.util.Enumeration
 267      * @see     #keys()
 268      * @see     #values()
 269      * @see     Map
 270      */
 271     public synchronized Enumeration<V> elements() {
 272         return this.<V>getEnumeration(VALUES);
 273     }
 274 
 275     /**
 276      * Tests if some key maps into the specified value in this hashtable.
 277      * This operation is more expensive than the {@link #containsKey
 278      * containsKey} method.
 279      *
 280      * <p>Note that this method is identical in functionality to
 281      * {@link #containsValue containsValue}, (which is part of the
 282      * {@link Map} interface in the collections framework).
 283      *
 284      * @param      value   a value to search for
 285      * @return     <code>true</code> if and only if some key maps to the
 286      *             <code>value</code> argument in this hashtable as
 287      *             determined by the <tt>equals</tt> method;
 288      *             <code>false</code> otherwise.
 289      * @exception  NullPointerException  if the value is <code>null</code>
 290      */
 291     public synchronized boolean contains(Object value) {
 292         if (value == null) {
 293             throw new NullPointerException();
 294         }
 295 
 296         Entry<?,?> tab[] = table;
 297         for (int i = tab.length ; i-- > 0 ;) {
 298             for (Entry<?,?> e = tab[i] ; e != null ; e = e.next) {
 299                 if (e.value.equals(value)) {
 300                     return true;
 301                 }
 302             }
 303         }
 304         return false;
 305     }
 306 
 307     /**
 308      * Returns true if this hashtable maps one or more keys to this value.
 309      *
 310      * <p>Note that this method is identical in functionality to {@link
 311      * #contains contains} (which predates the {@link Map} interface).
 312      *
 313      * @param value value whose presence in this hashtable is to be tested
 314      * @return <tt>true</tt> if this map maps one or more keys to the
 315      *         specified value
 316      * @throws NullPointerException  if the value is <code>null</code>
 317      * @since 1.2
 318      */
 319     public boolean containsValue(Object value) {
 320         return contains(value);
 321     }
 322 
 323     /**
 324      * Tests if the specified object is a key in this hashtable.
 325      *
 326      * @param   key   possible key
 327      * @return  <code>true</code> if and only if the specified object
 328      *          is a key in this hashtable, as determined by the
 329      *          <tt>equals</tt> method; <code>false</code> otherwise.
 330      * @throws  NullPointerException  if the key is <code>null</code>
 331      * @see     #contains(Object)
 332      */
 333     public synchronized boolean containsKey(Object key) {
 334         Entry<?,?> tab[] = table;
 335         int hash = key.hashCode();
 336         int index = (hash & 0x7FFFFFFF) % tab.length;
 337         for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
 338             if ((e.hash == hash) && e.key.equals(key)) {
 339                 return true;
 340             }
 341         }
 342         return false;
 343     }
 344 
 345     /**
 346      * Returns the value to which the specified key is mapped,
 347      * or {@code null} if this map contains no mapping for the key.
 348      *
 349      * <p>More formally, if this map contains a mapping from a key
 350      * {@code k} to a value {@code v} such that {@code (key.equals(k))},
 351      * then this method returns {@code v}; otherwise it returns
 352      * {@code null}.  (There can be at most one such mapping.)
 353      *
 354      * @param key the key whose associated value is to be returned
 355      * @return the value to which the specified key is mapped, or
 356      *         {@code null} if this map contains no mapping for the key
 357      * @throws NullPointerException if the specified key is null
 358      * @see     #put(Object, Object)
 359      */
 360     @SuppressWarnings("unchecked")
 361     public synchronized V get(Object key) {
 362         Entry<?,?> tab[] = table;
 363         int hash = key.hashCode();
 364         int index = (hash & 0x7FFFFFFF) % tab.length;
 365         for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
 366             if ((e.hash == hash) && e.key.equals(key)) {
 367                 return (V)e.value;
 368             }
 369         }
 370         return null;
 371     }
 372 
 373     /**
 374      * The maximum size of array to allocate.
 375      * Some VMs reserve some header words in an array.
 376      * Attempts to allocate larger arrays may result in
 377      * OutOfMemoryError: Requested array size exceeds VM limit
 378      */
 379     private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
 380 
 381     /**
 382      * Increases the capacity of and internally reorganizes this
 383      * hashtable, in order to accommodate and access its entries more
 384      * efficiently.  This method is called automatically when the
 385      * number of keys in the hashtable exceeds this hashtable's capacity
 386      * and load factor.
 387      */
 388     @SuppressWarnings("unchecked")
 389     protected void rehash() {
 390         int oldCapacity = table.length;
 391         Entry<?,?>[] oldMap = table;
 392 
 393         // overflow-conscious code
 394         int newCapacity = (oldCapacity << 1) + 1;
 395         if (newCapacity - MAX_ARRAY_SIZE > 0) {
 396             if (oldCapacity == MAX_ARRAY_SIZE)
 397                 // Keep running with MAX_ARRAY_SIZE buckets
 398                 return;
 399             newCapacity = MAX_ARRAY_SIZE;
 400         }
 401         Entry<?,?>[] newMap = new Entry<?,?>[newCapacity];
 402 
 403         modCount++;
 404         threshold = (int)Math.min(newCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
 405         table = newMap;
 406 
 407         for (int i = oldCapacity ; i-- > 0 ;) {
 408             for (Entry<K,V> old = (Entry<K,V>)oldMap[i] ; old != null ; ) {
 409                 Entry<K,V> e = old;
 410                 old = old.next;
 411 
 412                 int index = (e.hash & 0x7FFFFFFF) % newCapacity;
 413                 e.next = (Entry<K,V>)newMap[index];
 414                 newMap[index] = e;
 415             }
 416         }
 417     }
 418 
 419     private void addEntry(int hash, K key, V value, int index) {
 420         modCount++;
 421 
 422         Entry<?,?> tab[] = table;
 423         if (count >= threshold) {
 424             // Rehash the table if the threshold is exceeded
 425             rehash();
 426 
 427             tab = table;
 428             hash = key.hashCode();
 429             index = (hash & 0x7FFFFFFF) % tab.length;
 430         }
 431 
 432         // Creates the new entry.
 433         @SuppressWarnings("unchecked")
 434         Entry<K,V> e = (Entry<K,V>) tab[index];
 435         tab[index] = new Entry<>(hash, key, value, e);
 436         count++;
 437     }
 438 
 439     /**
 440      * Maps the specified <code>key</code> to the specified
 441      * <code>value</code> in this hashtable. Neither the key nor the
 442      * value can be <code>null</code>. <p>
 443      *
 444      * The value can be retrieved by calling the <code>get</code> method
 445      * with a key that is equal to the original key.
 446      *
 447      * @param      key     the hashtable key
 448      * @param      value   the value
 449      * @return     the previous value of the specified key in this hashtable,
 450      *             or <code>null</code> if it did not have one
 451      * @exception  NullPointerException  if the key or value is
 452      *               <code>null</code>
 453      * @see     Object#equals(Object)
 454      * @see     #get(Object)
 455      */
 456     public synchronized V put(K key, V value) {
 457         // Make sure the value is not null
 458         if (value == null) {
 459             throw new NullPointerException();
 460         }
 461 
 462         // Makes sure the key is not already in the hashtable.
 463         Entry<?,?> tab[] = table;
 464         int hash = key.hashCode();
 465         int index = (hash & 0x7FFFFFFF) % tab.length;
 466         @SuppressWarnings("unchecked")
 467         Entry<K,V> entry = (Entry<K,V>)tab[index];
 468         for(; entry != null ; entry = entry.next) {
 469             if ((entry.hash == hash) && entry.key.equals(key)) {
 470                 V old = entry.value;
 471                 entry.value = value;
 472                 return old;
 473             }
 474         }
 475 
 476         addEntry(hash, key, value, index);
 477         return null;
 478     }
 479 
 480     /**
 481      * Removes the key (and its corresponding value) from this
 482      * hashtable. This method does nothing if the key is not in the hashtable.
 483      *
 484      * @param   key   the key that needs to be removed
 485      * @return  the value to which the key had been mapped in this hashtable,
 486      *          or <code>null</code> if the key did not have a mapping
 487      * @throws  NullPointerException  if the key is <code>null</code>
 488      */
 489     public synchronized V remove(Object key) {
 490         Entry<?,?> tab[] = table;
 491         int hash = key.hashCode();
 492         int index = (hash & 0x7FFFFFFF) % tab.length;
 493         @SuppressWarnings("unchecked")
 494         Entry<K,V> e = (Entry<K,V>)tab[index];
 495         for(Entry<K,V> prev = null ; e != null ; prev = e, e = e.next) {
 496             if ((e.hash == hash) && e.key.equals(key)) {
 497                 modCount++;
 498                 if (prev != null) {
 499                     prev.next = e.next;
 500                 } else {
 501                     tab[index] = e.next;
 502                 }
 503                 count--;
 504                 V oldValue = e.value;
 505                 e.value = null;
 506                 return oldValue;
 507             }
 508         }
 509         return null;
 510     }
 511 
 512     /**
 513      * Copies all of the mappings from the specified map to this hashtable.
 514      * These mappings will replace any mappings that this hashtable had for any
 515      * of the keys currently in the specified map.
 516      *
 517      * @param t mappings to be stored in this map
 518      * @throws NullPointerException if the specified map is null
 519      * @since 1.2
 520      */
 521     public synchronized void putAll(Map<? extends K, ? extends V> t) {
 522         for (Map.Entry<? extends K, ? extends V> e : t.entrySet())
 523             put(e.getKey(), e.getValue());
 524     }
 525 
 526     /**
 527      * Clears this hashtable so that it contains no keys.
 528      */
 529     public synchronized void clear() {
 530         Entry<?,?> tab[] = table;
 531         modCount++;
 532         for (int index = tab.length; --index >= 0; )
 533             tab[index] = null;
 534         count = 0;
 535     }
 536 
 537     /**
 538      * Creates a shallow copy of this hashtable. All the structure of the
 539      * hashtable itself is copied, but the keys and values are not cloned.
 540      * This is a relatively expensive operation.
 541      *
 542      * @return  a clone of the hashtable
 543      */
 544     public synchronized Object clone() {
 545         try {
 546             Hashtable<?,?> t = (Hashtable<?,?>)super.clone();
 547             t.table = new Entry<?,?>[table.length];
 548             for (int i = table.length ; i-- > 0 ; ) {
 549                 t.table[i] = (table[i] != null)
 550                     ? (Entry<?,?>) table[i].clone() : null;
 551             }
 552             t.keySet = null;
 553             t.entrySet = null;
 554             t.values = null;
 555             t.modCount = 0;
 556             return t;
 557         } catch (CloneNotSupportedException e) {
 558             // this shouldn't happen, since we are Cloneable
 559             throw new InternalError(e);
 560         }
 561     }
 562 
 563     /**
 564      * Returns a string representation of this <tt>Hashtable</tt> object
 565      * in the form of a set of entries, enclosed in braces and separated
 566      * by the ASCII characters "<tt>,&nbsp;</tt>" (comma and space). Each
 567      * entry is rendered as the key, an equals sign <tt>=</tt>, and the
 568      * associated element, where the <tt>toString</tt> method is used to
 569      * convert the key and element to strings.
 570      *
 571      * @return  a string representation of this hashtable
 572      */
 573     public synchronized String toString() {
 574         int max = size() - 1;
 575         if (max == -1)
 576             return "{}";
 577 
 578         StringBuilder sb = new StringBuilder();
 579         Iterator<Map.Entry<K,V>> it = entrySet().iterator();
 580 
 581         sb.append('{');
 582         for (int i = 0; ; i++) {
 583             Map.Entry<K,V> e = it.next();
 584             K key = e.getKey();
 585             V value = e.getValue();
 586             sb.append(key   == this ? "(this Map)" : key.toString());
 587             sb.append('=');
 588             sb.append(value == this ? "(this Map)" : value.toString());
 589 
 590             if (i == max)
 591                 return sb.append('}').toString();
 592             sb.append(", ");
 593         }
 594     }
 595 
 596 
 597     private <T> Enumeration<T> getEnumeration(int type) {
 598         if (count == 0) {
 599             return Collections.emptyEnumeration();
 600         } else {
 601             return new Enumerator<>(type, false);
 602         }
 603     }
 604 
 605     private <T> Iterator<T> getIterator(int type) {
 606         if (count == 0) {
 607             return Collections.emptyIterator();
 608         } else {
 609             return new Enumerator<>(type, true);
 610         }
 611     }
 612 
 613     // Views
 614 
 615     /**
 616      * Each of these fields are initialized to contain an instance of the
 617      * appropriate view the first time this view is requested.  The views are
 618      * stateless, so there's no reason to create more than one of each.
 619      */
 620     private transient volatile Set<K> keySet;
 621     private transient volatile Set<Map.Entry<K,V>> entrySet;
 622     private transient volatile Collection<V> values;
 623 
 624     /**
 625      * Returns a {@link Set} view of the keys contained in this map.
 626      * The set is backed by the map, so changes to the map are
 627      * reflected in the set, and vice-versa.  If the map is modified
 628      * while an iteration over the set is in progress (except through
 629      * the iterator's own <tt>remove</tt> operation), the results of
 630      * the iteration are undefined.  The set supports element removal,
 631      * which removes the corresponding mapping from the map, via the
 632      * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
 633      * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
 634      * operations.  It does not support the <tt>add</tt> or <tt>addAll</tt>
 635      * operations.
 636      *
 637      * @since 1.2
 638      */
 639     public Set<K> keySet() {
 640         if (keySet == null)
 641             keySet = Collections.synchronizedSet(new KeySet(), this);
 642         return keySet;
 643     }
 644 
 645     private class KeySet extends AbstractSet<K> {
 646         public Iterator<K> iterator() {
 647             return getIterator(KEYS);
 648         }
 649         public int size() {
 650             return count;
 651         }
 652         public boolean contains(Object o) {
 653             return containsKey(o);
 654         }
 655         public boolean remove(Object o) {
 656             return Hashtable.this.remove(o) != null;
 657         }
 658         public void clear() {
 659             Hashtable.this.clear();
 660         }
 661     }
 662 
 663     /**
 664      * Returns a {@link Set} view of the mappings contained in this map.
 665      * The set is backed by the map, so changes to the map are
 666      * reflected in the set, and vice-versa.  If the map is modified
 667      * while an iteration over the set is in progress (except through
 668      * the iterator's own <tt>remove</tt> operation, or through the
 669      * <tt>setValue</tt> operation on a map entry returned by the
 670      * iterator) the results of the iteration are undefined.  The set
 671      * supports element removal, which removes the corresponding
 672      * mapping from the map, via the <tt>Iterator.remove</tt>,
 673      * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and
 674      * <tt>clear</tt> operations.  It does not support the
 675      * <tt>add</tt> or <tt>addAll</tt> operations.
 676      *
 677      * @since 1.2
 678      */
 679     public Set<Map.Entry<K,V>> entrySet() {
 680         if (entrySet==null)
 681             entrySet = Collections.synchronizedSet(new EntrySet(), this);
 682         return entrySet;
 683     }
 684 
 685     private class EntrySet extends AbstractSet<Map.Entry<K,V>> {
 686         public Iterator<Map.Entry<K,V>> iterator() {
 687             return getIterator(ENTRIES);
 688         }
 689 
 690         public boolean add(Map.Entry<K,V> o) {
 691             return super.add(o);
 692         }
 693 
 694         public boolean contains(Object o) {
 695             if (!(o instanceof Map.Entry))
 696                 return false;
 697             Map.Entry<?,?> entry = (Map.Entry<?,?>)o;
 698             Object key = entry.getKey();
 699             Entry<?,?>[] tab = table;
 700             int hash = key.hashCode();
 701             int index = (hash & 0x7FFFFFFF) % tab.length;
 702 
 703             for (Entry<?,?> e = tab[index]; e != null; e = e.next)
 704                 if (e.hash==hash && e.equals(entry))
 705                     return true;
 706             return false;
 707         }
 708 
 709         public boolean remove(Object o) {
 710             if (!(o instanceof Map.Entry))
 711                 return false;
 712             Map.Entry<?,?> entry = (Map.Entry<?,?>) o;
 713             Object key = entry.getKey();
 714             Entry<?,?>[] tab = table;
 715             int hash = key.hashCode();
 716             int index = (hash & 0x7FFFFFFF) % tab.length;
 717 
 718             @SuppressWarnings("unchecked")
 719             Entry<K,V> e = (Entry<K,V>)tab[index];
 720             for(Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
 721                 if (e.hash==hash && e.equals(entry)) {
 722                     modCount++;
 723                     if (prev != null)
 724                         prev.next = e.next;
 725                     else
 726                         tab[index] = e.next;
 727 
 728                     count--;
 729                     e.value = null;
 730                     return true;
 731                 }
 732             }
 733             return false;
 734         }
 735 
 736         public int size() {
 737             return count;
 738         }
 739 
 740         public void clear() {
 741             Hashtable.this.clear();
 742         }
 743     }
 744 
 745     /**
 746      * Returns a {@link Collection} view of the values contained in this map.
 747      * The collection is backed by the map, so changes to the map are
 748      * reflected in the collection, and vice-versa.  If the map is
 749      * modified while an iteration over the collection is in progress
 750      * (except through the iterator's own <tt>remove</tt> operation),
 751      * the results of the iteration are undefined.  The collection
 752      * supports element removal, which removes the corresponding
 753      * mapping from the map, via the <tt>Iterator.remove</tt>,
 754      * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
 755      * <tt>retainAll</tt> and <tt>clear</tt> operations.  It does not
 756      * support the <tt>add</tt> or <tt>addAll</tt> operations.
 757      *
 758      * @since 1.2
 759      */
 760     public Collection<V> values() {
 761         if (values==null)
 762             values = Collections.synchronizedCollection(new ValueCollection(),
 763                                                         this);
 764         return values;
 765     }
 766 
 767     private class ValueCollection extends AbstractCollection<V> {
 768         public Iterator<V> iterator() {
 769             return getIterator(VALUES);
 770         }
 771         public int size() {
 772             return count;
 773         }
 774         public boolean contains(Object o) {
 775             return containsValue(o);
 776         }
 777         public void clear() {
 778             Hashtable.this.clear();
 779         }
 780     }
 781 
 782     // Comparison and hashing
 783 
 784     /**
 785      * Compares the specified Object with this Map for equality,
 786      * as per the definition in the Map interface.
 787      *
 788      * @param  o object to be compared for equality with this hashtable
 789      * @return true if the specified Object is equal to this Map
 790      * @see Map#equals(Object)
 791      * @since 1.2
 792      */
 793     public synchronized boolean equals(Object o) {
 794         if (o == this)
 795             return true;
 796 
 797         if (!(o instanceof Map))
 798             return false;
 799         Map<?,?> t = (Map<?,?>) o;
 800         if (t.size() != size())
 801             return false;
 802 
 803         try {
 804             Iterator<Map.Entry<K,V>> i = entrySet().iterator();
 805             while (i.hasNext()) {
 806                 Map.Entry<K,V> e = i.next();
 807                 K key = e.getKey();
 808                 V value = e.getValue();
 809                 if (value == null) {
 810                     if (!(t.get(key)==null && t.containsKey(key)))
 811                         return false;
 812                 } else {
 813                     if (!value.equals(t.get(key)))
 814                         return false;
 815                 }
 816             }
 817         } catch (ClassCastException unused)   {
 818             return false;
 819         } catch (NullPointerException unused) {
 820             return false;
 821         }
 822 
 823         return true;
 824     }
 825 
 826     /**
 827      * Returns the hash code value for this Map as per the definition in the
 828      * Map interface.
 829      *
 830      * @see Map#hashCode()
 831      * @since 1.2
 832      */
 833     public synchronized int hashCode() {
 834         /*
 835          * This code detects the recursion caused by computing the hash code
 836          * of a self-referential hash table and prevents the stack overflow
 837          * that would otherwise result.  This allows certain 1.1-era
 838          * applets with self-referential hash tables to work.  This code
 839          * abuses the loadFactor field to do double-duty as a hashCode
 840          * in progress flag, so as not to worsen the space performance.
 841          * A negative load factor indicates that hash code computation is
 842          * in progress.
 843          */
 844         int h = 0;
 845         if (count == 0 || loadFactor < 0)
 846             return h;  // Returns zero
 847 
 848         loadFactor = -loadFactor;  // Mark hashCode computation in progress
 849         Entry<?,?>[] tab = table;
 850         for (Entry<?,?> entry : tab) {
 851             while (entry != null) {
 852                 h += entry.hashCode();
 853                 entry = entry.next;
 854             }
 855         }
 856 
 857         loadFactor = -loadFactor;  // Mark hashCode computation complete
 858 
 859         return h;
 860     }
 861 
 862     @Override
 863     public synchronized V getOrDefault(Object key, V defaultValue) {
 864         V result = get(key);
 865         return (null == result) ? defaultValue : result;
 866     }
 867 
 868     @SuppressWarnings("unchecked")
 869     @Override
 870     public synchronized void forEach(BiConsumer<? super K, ? super V> action) {
 871         Objects.requireNonNull(action);     // explicit check required in case
 872                                             // table is empty.
 873         final int expectedModCount = modCount;
 874 
 875         Entry<?, ?>[] tab = table;
 876         for (Entry<?, ?> entry : tab) {
 877             while (entry != null) {
 878                 action.accept((K)entry.key, (V)entry.value);
 879                 entry = entry.next;
 880 
 881                 if (expectedModCount != modCount) {
 882                     throw new ConcurrentModificationException();
 883                 }
 884             }
 885         }
 886     }
 887 
 888     @SuppressWarnings("unchecked")
 889     @Override
 890     public synchronized void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
 891         Objects.requireNonNull(function);     // explicit check required in case
 892                                               // table is empty.
 893         final int expectedModCount = modCount;
 894 
 895         Entry<K, V>[] tab = (Entry<K, V>[])table;
 896         for (Entry<K, V> entry : tab) {
 897             while (entry != null) {
 898                 entry.value = Objects.requireNonNull(
 899                     function.apply(entry.key, entry.value));
 900                 entry = entry.next;
 901 
 902                 if (expectedModCount != modCount) {
 903                     throw new ConcurrentModificationException();
 904                 }
 905             }
 906         }
 907     }
 908 
 909     @Override
 910     public synchronized V putIfAbsent(K key, V value) {
 911         Objects.requireNonNull(value);
 912 
 913         // Makes sure the key is not already in the hashtable.
 914         Entry<?,?> tab[] = table;
 915         int hash = key.hashCode();
 916         int index = (hash & 0x7FFFFFFF) % tab.length;
 917         @SuppressWarnings("unchecked")
 918         Entry<K,V> entry = (Entry<K,V>)tab[index];
 919         for (; entry != null; entry = entry.next) {
 920             if ((entry.hash == hash) && entry.key.equals(key)) {
 921                 V old = entry.value;
 922                 if (old == null) {
 923                     entry.value = value;
 924                 }
 925                 return old;
 926             }
 927         }
 928 
 929         addEntry(hash, key, value, index);
 930         return null;
 931     }
 932 
 933     @Override
 934     public synchronized boolean remove(Object key, Object value) {
 935         Objects.requireNonNull(value);
 936 
 937         Entry<?,?> tab[] = table;
 938         int hash = key.hashCode();
 939         int index = (hash & 0x7FFFFFFF) % tab.length;
 940         @SuppressWarnings("unchecked")
 941         Entry<K,V> e = (Entry<K,V>)tab[index];
 942         for (Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
 943             if ((e.hash == hash) && e.key.equals(key) && e.value.equals(value)) {
 944                 modCount++;
 945                 if (prev != null) {
 946                     prev.next = e.next;
 947                 } else {
 948                     tab[index] = e.next;
 949                 }
 950                 count--;
 951                 e.value = null;
 952                 return true;
 953             }
 954         }
 955         return false;
 956     }
 957 
 958     @Override
 959     public synchronized boolean replace(K key, V oldValue, V newValue) {
 960         Objects.requireNonNull(oldValue);
 961         Objects.requireNonNull(newValue);
 962         Entry<?,?> tab[] = table;
 963         int hash = key.hashCode();
 964         int index = (hash & 0x7FFFFFFF) % tab.length;
 965         @SuppressWarnings("unchecked")
 966         Entry<K,V> e = (Entry<K,V>)tab[index];
 967         for (; e != null; e = e.next) {
 968             if ((e.hash == hash) && e.key.equals(key)) {
 969                 if (e.value.equals(oldValue)) {
 970                     e.value = newValue;
 971                     return true;
 972                 } else {
 973                     return false;
 974                 }
 975             }
 976         }
 977         return false;
 978     }
 979 
 980     @Override
 981     public synchronized V replace(K key, V value) {
 982         Objects.requireNonNull(value);
 983         Entry<?,?> tab[] = table;
 984         int hash = key.hashCode();
 985         int index = (hash & 0x7FFFFFFF) % tab.length;
 986         @SuppressWarnings("unchecked")
 987         Entry<K,V> e = (Entry<K,V>)tab[index];
 988         for (; e != null; e = e.next) {
 989             if ((e.hash == hash) && e.key.equals(key)) {
 990                 V oldValue = e.value;
 991                 e.value = value;
 992                 return oldValue;
 993             }
 994         }
 995         return null;
 996     }
 997 
 998     @Override
 999     public synchronized V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
1000         Objects.requireNonNull(mappingFunction);
1001 
1002         Entry<?,?> tab[] = table;
1003         int hash = key.hashCode();
1004         int index = (hash & 0x7FFFFFFF) % tab.length;
1005         @SuppressWarnings("unchecked")
1006         Entry<K,V> e = (Entry<K,V>)tab[index];
1007         for (; e != null; e = e.next) {
1008             if (e.hash == hash && e.key.equals(key)) {
1009                 // Hashtable not accept null value
1010                 return e.value;
1011             }
1012         }
1013 
1014         V newValue = mappingFunction.apply(key);
1015         if (newValue != null) {
1016             addEntry(hash, key, newValue, index);
1017         }
1018 
1019         return newValue;
1020     }
1021 
1022     @Override
1023     public synchronized V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
1024         Objects.requireNonNull(remappingFunction);
1025 
1026         Entry<?,?> tab[] = table;
1027         int hash = key.hashCode();
1028         int index = (hash & 0x7FFFFFFF) % tab.length;
1029         @SuppressWarnings("unchecked")
1030         Entry<K,V> e = (Entry<K,V>)tab[index];
1031         for (Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
1032             if (e.hash == hash && e.key.equals(key)) {
1033                 V newValue = remappingFunction.apply(key, e.value);
1034                 if (newValue == null) {
1035                     modCount++;
1036                     if (prev != null) {
1037                         prev.next = e.next;
1038                     } else {
1039                         tab[index] = e.next;
1040                     }
1041                     count--;
1042                 } else {
1043                     e.value = newValue;
1044                 }
1045                 return newValue;
1046             }
1047         }
1048         return null;
1049     }
1050 
1051     @Override
1052     public synchronized V compute(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
1053         Objects.requireNonNull(remappingFunction);
1054 
1055         Entry<?,?> tab[] = table;
1056         int hash = key.hashCode();
1057         int index = (hash & 0x7FFFFFFF) % tab.length;
1058         @SuppressWarnings("unchecked")
1059         Entry<K,V> e = (Entry<K,V>)tab[index];
1060         for (Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
1061             if (e.hash == hash && Objects.equals(e.key, key)) {
1062                 V newValue = remappingFunction.apply(key, e.value);
1063                 if (newValue == null) {
1064                     modCount++;
1065                     if (prev != null) {
1066                         prev.next = e.next;
1067                     } else {
1068                         tab[index] = e.next;
1069                     }
1070                     count--;
1071                 } else {
1072                     e.value = newValue;
1073                 }
1074                 return newValue;
1075             }
1076         }
1077 
1078         V newValue = remappingFunction.apply(key, null);
1079         if (newValue != null) {
1080             addEntry(hash, key, newValue, index);
1081         }
1082 
1083         return newValue;
1084     }
1085 
1086     @Override
1087     public synchronized V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
1088         Objects.requireNonNull(remappingFunction);
1089 
1090         Entry<?,?> tab[] = table;
1091         int hash = key.hashCode();
1092         int index = (hash & 0x7FFFFFFF) % tab.length;
1093         @SuppressWarnings("unchecked")
1094         Entry<K,V> e = (Entry<K,V>)tab[index];
1095         for (Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
1096             if (e.hash == hash && e.key.equals(key)) {
1097                 V newValue = remappingFunction.apply(e.value, value);
1098                 if (newValue == null) {
1099                     modCount++;
1100                     if (prev != null) {
1101                         prev.next = e.next;
1102                     } else {
1103                         tab[index] = e.next;
1104                     }
1105                     count--;
1106                 } else {
1107                     e.value = newValue;
1108                 }
1109                 return newValue;
1110             }
1111         }
1112 
1113         if (value != null) {
1114             addEntry(hash, key, value, index);
1115         }
1116 
1117         return value;
1118     }
1119 
1120     /**
1121      * Save the state of the Hashtable to a stream (i.e., serialize it).
1122      *
1123      * @serialData The <i>capacity</i> of the Hashtable (the length of the
1124      *             bucket array) is emitted (int), followed by the
1125      *             <i>size</i> of the Hashtable (the number of key-value
1126      *             mappings), followed by the key (Object) and value (Object)
1127      *             for each key-value mapping represented by the Hashtable
1128      *             The key-value mappings are emitted in no particular order.
1129      */
1130     private void writeObject(java.io.ObjectOutputStream s)
1131             throws IOException {
1132         Entry<Object, Object> entryStack = null;
1133 
1134         synchronized (this) {
1135             // Write out the length, threshold, loadfactor
1136             s.defaultWriteObject();
1137 
1138             // Write out length, count of elements
1139             s.writeInt(table.length);
1140             s.writeInt(count);
1141 
1142             // Stack copies of the entries in the table
1143             for (int index = 0; index < table.length; index++) {
1144                 Entry<?,?> entry = table[index];
1145 
1146                 while (entry != null) {
1147                     entryStack =
1148                         new Entry<>(0, entry.key, entry.value, entryStack);
1149                     entry = entry.next;
1150                 }
1151             }
1152         }
1153 
1154         // Write out the key/value objects from the stacked entries
1155         while (entryStack != null) {
1156             s.writeObject(entryStack.key);
1157             s.writeObject(entryStack.value);
1158             entryStack = entryStack.next;
1159         }
1160     }
1161 
1162     /**
1163      * Reconstitute the Hashtable from a stream (i.e., deserialize it).
1164      */
1165     private void readObject(java.io.ObjectInputStream s)
1166          throws IOException, ClassNotFoundException
1167     {
1168         // Read in the length, threshold, and loadfactor
1169         s.defaultReadObject();
1170 
1171         // Read the original length of the array and number of elements
1172         int origlength = s.readInt();
1173         int elements = s.readInt();
1174 
1175         // Compute new size with a bit of room 5% to grow but
1176         // no larger than the original size.  Make the length
1177         // odd if it's large enough, this helps distribute the entries.
1178         // Guard against the length ending up zero, that's not valid.
1179         int length = (int)(elements * loadFactor) + (elements / 20) + 3;
1180         if (length > elements && (length & 1) == 0)
1181             length--;
1182         if (origlength > 0 && length > origlength)
1183             length = origlength;
1184         table = new Entry<?,?>[length];
1185         threshold = (int)Math.min(length * loadFactor, MAX_ARRAY_SIZE + 1);
1186         count = 0;
1187 
1188         // Read the number of elements and then all the key/value objects
1189         for (; elements > 0; elements--) {
1190             @SuppressWarnings("unchecked")
1191                 K key = (K)s.readObject();
1192             @SuppressWarnings("unchecked")
1193                 V value = (V)s.readObject();
1194             // synch could be eliminated for performance
1195             reconstitutionPut(table, key, value);
1196         }
1197     }
1198 
1199     /**
1200      * The put method used by readObject. This is provided because put
1201      * is overridable and should not be called in readObject since the
1202      * subclass will not yet be initialized.
1203      *
1204      * <p>This differs from the regular put method in several ways. No
1205      * checking for rehashing is necessary since the number of elements
1206      * initially in the table is known. The modCount is not incremented
1207      * because we are creating a new instance. Also, no return value
1208      * is needed.
1209      */
1210     private void reconstitutionPut(Entry<?,?>[] tab, K key, V value)
1211         throws StreamCorruptedException
1212     {
1213         if (value == null) {
1214             throw new java.io.StreamCorruptedException();
1215         }
1216         // Makes sure the key is not already in the hashtable.
1217         // This should not happen in deserialized version.
1218         int hash = key.hashCode();
1219         int index = (hash & 0x7FFFFFFF) % tab.length;
1220         for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
1221             if ((e.hash == hash) && e.key.equals(key)) {
1222                 throw new java.io.StreamCorruptedException();
1223             }
1224         }
1225         // Creates the new entry.
1226         @SuppressWarnings("unchecked")
1227             Entry<K,V> e = (Entry<K,V>)tab[index];
1228         tab[index] = new Entry<>(hash, key, value, e);
1229         count++;
1230     }
1231 
1232     /**
1233      * Hashtable bucket collision list entry
1234      */
1235     private static class Entry<K,V> implements Map.Entry<K,V> {
1236         final int hash;
1237         final K key;
1238         V value;
1239         Entry<K,V> next;
1240 
1241         protected Entry(int hash, K key, V value, Entry<K,V> next) {
1242             this.hash = hash;
1243             this.key =  key;
1244             this.value = value;
1245             this.next = next;
1246         }
1247 
1248         @SuppressWarnings("unchecked")
1249         protected Object clone() {
1250             return new Entry<>(hash, key, value,
1251                                   (next==null ? null : (Entry<K,V>) next.clone()));
1252         }
1253 
1254         // Map.Entry Ops
1255 
1256         public K getKey() {
1257             return key;
1258         }
1259 
1260         public V getValue() {
1261             return value;
1262         }
1263 
1264         public V setValue(V value) {
1265             if (value == null)
1266                 throw new NullPointerException();
1267 
1268             V oldValue = this.value;
1269             this.value = value;
1270             return oldValue;
1271         }
1272 
1273         public boolean equals(Object o) {
1274             if (!(o instanceof Map.Entry))
1275                 return false;
1276             Map.Entry<?,?> e = (Map.Entry<?,?>)o;
1277 
1278             return (key==null ? e.getKey()==null : key.equals(e.getKey())) &&
1279                (value==null ? e.getValue()==null : value.equals(e.getValue()));
1280         }
1281 
1282         public int hashCode() {
1283             return hash ^ Objects.hashCode(value);
1284         }
1285 
1286         public String toString() {
1287             return key.toString()+"="+value.toString();
1288         }
1289     }
1290 
1291     // Types of Enumerations/Iterations
1292     private static final int KEYS = 0;
1293     private static final int VALUES = 1;
1294     private static final int ENTRIES = 2;
1295 
1296     /**
1297      * A hashtable enumerator class.  This class implements both the
1298      * Enumeration and Iterator interfaces, but individual instances
1299      * can be created with the Iterator methods disabled.  This is necessary
1300      * to avoid unintentionally increasing the capabilities granted a user
1301      * by passing an Enumeration.
1302      */
1303     private class Enumerator<T> implements Enumeration<T>, Iterator<T> {
1304         Entry<?,?>[] table = Hashtable.this.table;
1305         int index = table.length;
1306         Entry<?,?> entry;
1307         Entry<?,?> lastReturned;
1308         int type;
1309 
1310         /**
1311          * Indicates whether this Enumerator is serving as an Iterator
1312          * or an Enumeration.  (true -> Iterator).
1313          */
1314         boolean iterator;
1315 
1316         /**
1317          * The modCount value that the iterator believes that the backing
1318          * Hashtable should have.  If this expectation is violated, the iterator
1319          * has detected concurrent modification.
1320          */
1321         protected int expectedModCount = modCount;
1322 
1323         Enumerator(int type, boolean iterator) {
1324             this.type = type;
1325             this.iterator = iterator;
1326         }
1327 
1328         public boolean hasMoreElements() {
1329             Entry<?,?> e = entry;
1330             int i = index;
1331             Entry<?,?>[] t = table;
1332             /* Use locals for faster loop iteration */
1333             while (e == null && i > 0) {
1334                 e = t[--i];
1335             }
1336             entry = e;
1337             index = i;
1338             return e != null;
1339         }
1340 
1341         @SuppressWarnings("unchecked")
1342         public T nextElement() {
1343             Entry<?,?> et = entry;
1344             int i = index;
1345             Entry<?,?>[] t = table;
1346             /* Use locals for faster loop iteration */
1347             while (et == null && i > 0) {
1348                 et = t[--i];
1349             }
1350             entry = et;
1351             index = i;
1352             if (et != null) {
1353                 Entry<?,?> e = lastReturned = entry;
1354                 entry = e.next;
1355                 return type == KEYS ? (T)e.key : (type == VALUES ? (T)e.value : (T)e);
1356             }
1357             throw new NoSuchElementException("Hashtable Enumerator");
1358         }
1359 
1360         // Iterator methods
1361         public boolean hasNext() {
1362             return hasMoreElements();
1363         }
1364 
1365         public T next() {
1366             if (modCount != expectedModCount)
1367                 throw new ConcurrentModificationException();
1368             return nextElement();
1369         }
1370 
1371         public void remove() {
1372             if (!iterator)
1373                 throw new UnsupportedOperationException();
1374             if (lastReturned == null)
1375                 throw new IllegalStateException("Hashtable Enumerator");
1376             if (modCount != expectedModCount)
1377                 throw new ConcurrentModificationException();
1378 
1379             synchronized(Hashtable.this) {
1380                 Entry<?,?>[] tab = Hashtable.this.table;
1381                 int index = (lastReturned.hash & 0x7FFFFFFF) % tab.length;
1382 
1383                 @SuppressWarnings("unchecked")
1384                 Entry<K,V> e = (Entry<K,V>)tab[index];
1385                 for(Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
1386                     if (e == lastReturned) {
1387                         modCount++;
1388                         expectedModCount++;
1389                         if (prev == null)
1390                             tab[index] = e.next;
1391                         else
1392                             prev.next = e.next;
1393                         count--;
1394                         lastReturned = null;
1395                         return;
1396                     }
1397                 }
1398                 throw new ConcurrentModificationException();
1399             }
1400         }
1401     }
1402 }
The Hashtable source in JDK 1.8
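Throughout the listing above, Hashtable computes a bucket index as `(hash & 0x7FFFFFFF) % tab.length`: the mask clears the sign bit so the remainder is never negative, and a plain modulo maps it into the table. HashMap, whose table length is always a power of two, can use a cheaper bitmask instead. A minimal sketch of the two strategies (class and method names here are illustrative, not from the JDK):

```java
public class IndexDemo {
    // Hashtable-style: clear the sign bit, then take the remainder.
    // Works for any table length, including the default odd length 11.
    static int hashtableIndex(int hash, int tableLength) {
        return (hash & 0x7FFFFFFF) % tableLength;
    }

    // HashMap-style: the table length is a power of two, so the mask
    // (length - 1) selects the low bits of the hash directly.
    static int hashMapIndex(int hash, int tableLength) {
        return hash & (tableLength - 1);
    }

    public static void main(String[] args) {
        int negHash = "some key".hashCode() | 0x80000000; // force a negative hash
        // Even for a negative hash, both results stay within [0, length).
        System.out.println(hashtableIndex(negHash, 11));
        System.out.println(hashMapIndex(negHash, 16));
    }
}
```

The sign-bit mask matters because Java's `%` can return a negative result for a negative left operand, which would be an invalid array index.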

  Test program:

 1 package com.hash.hashmaptest;
 2 import java.util.*;
 3 
 4 public class HashtableTest {
 5 
 6     public static void main(String[] args) {
 7         testHashtableAPIs();
 8     }
 9 
10     private static void testHashtableAPIs() {
11         // Initialize the random generator
12         Random r = new Random();
13         // Create a Hashtable; use generics instead of the raw type
14         Hashtable<String, Integer> table = new Hashtable<>();
15         // put: add mappings (autoboxing replaces the deprecated new Integer(...))
16         table.put("one", r.nextInt(10));
17         table.put("two", r.nextInt(10));
18         table.put("three", r.nextInt(10));
19 
20         // Print the whole table
21         System.out.println("table:" + table);
22 
23         // Traverse the key-value pairs with an Iterator
24         Iterator<Map.Entry<String, Integer>> iter = table.entrySet().iterator();
25         while (iter.hasNext()) {
26             Map.Entry<String, Integer> entry = iter.next();
27             System.out.println("next : " + entry.getKey() + " - " + entry.getValue());
28         }
29 
30         // size(): number of key-value mappings in the Hashtable
31         System.out.println("size:" + table.size());
32 
33         // containsKey(Object key): does the table contain this key?
34         System.out.println("contains key two : " + table.containsKey("two"));
35         System.out.println("contains key five : " + table.containsKey("five"));
36 
37         // containsValue(Object value): does the table contain this value?
38         System.out.println("contains value 0 : " + table.containsValue(0));
39 
40         // remove(Object key): delete the mapping for this key
41         table.remove("three");
42 
43         System.out.println("table:" + table);
44 
45         // clear(): remove all mappings
46         table.clear();
47 
48         // isEmpty(): is the Hashtable empty?
49         System.out.println(table.isEmpty() ? "table is empty" : "table is not empty");
50     }
51 }

五、Comparing HashMap and Hashtable

 5.1、When They Were Introduced

    Hashtable dates back to JDK 1.1, while HashMap was introduced in JDK 1.2, so HashMap is the newer of the two.

 5.2、Differences at the API Level

 

    The two classes have somewhat different inheritance hierarchies, although both implement the Map, Cloneable, and Serializable interfaces. HashMap extends the abstract class AbstractMap, whereas Hashtable extends the abstract class Dictionary, and Dictionary is an obsolete class.

    Hashtable also exposes two public methods that HashMap lacks. One is elements(), which comes from Dictionary; since that class is obsolete, the method is of little use. The other is contains(), which is equally pointless, because it does exactly the same thing as containsValue().
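A quick sketch (the class name ContainsDemo is ours) confirming that Hashtable.contains() tests values, exactly like containsValue(), and does not test keys:

```java
import java.util.Hashtable;

public class ContainsDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> t = new Hashtable<>();
        t.put("one", 1);
        // contains() and containsValue() perform the same test, on values
        System.out.println(t.contains(1));       // true
        System.out.println(t.containsValue(1));  // true
        System.out.println(t.contains("one"));   // false: "one" is a key, not a value
    }
}
```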

    In addition, HashMap supports null keys and null values, while Hashtable throws a NullPointerException when it encounters null. There is nothing in Hashtable's implementation that fundamentally prevents null support; HashMap simply special-cases null, defining its hashCode as 0 and storing the mapping in bucket 0 of the hash table.
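The difference is easy to demonstrate. Below is a minimal sketch (the class name NullKeyDemo and the helper acceptsNullKey are ours): putting a null key succeeds on HashMap but makes Hashtable throw.

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    // Returns true if the map accepted a null key, false if it threw NPE.
    static boolean acceptsNullKey(Map<String, String> map) {
        try {
            map.put(null, "value");
            return true;
        } catch (NullPointerException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(acceptsNullKey(new HashMap<>()));   // true
        System.out.println(acceptsNullKey(new Hashtable<>())); // false
    }
}
```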

 5.3、Differences at the Algorithm Level

    The initial capacity and the growth rule differ between the two.

 1 The following code and comments are from java.util.Hashtable
 2 // The default initial capacity of the hash table is 11
 3 public Hashtable() {
 4     this(11, 0.75f);
 5 }
 6 protected void rehash() {
 7     int oldCapacity = table.length;
 8     Entry<K,V>[] oldMap = table;
 9  
10     // Each resize grows the table to 2n+1
11     int newCapacity = (oldCapacity << 1) + 1;
12     // ...
13 }
14 The following code and comments are from java.util.HashMap
15 // The default initial capacity of the hash table is 2^4 = 16
16 static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
17 void addEntry(int hash, K key, V value, int bucketIndex) {
18     // Each resize doubles the table to 2n
19     if ((size >= threshold) && (null != table[bucketIndex])) {
20         resize(2 * table.length);
21     }
22     // ...
23 }

    As the code shows, Hashtable's default initial capacity is 11, and each resize grows the table to 2n+1. HashMap's default initial capacity is 16, and each resize doubles the table. If you pass an initial capacity at construction time, Hashtable uses it as-is, while HashMap rounds it up to the next power of two. In other words, Hashtable favors primes and odd numbers, while HashMap always uses a power of two as the table size.

    When the table size is prime, a simple modulo hash spreads keys more evenly, so on this point Hashtable's sizing looks the smarter choice. On the other hand, when the modulus is a power of two, the remainder can be computed with a single bitwise operation, which is far faster than division, so in terms of hashing speed HashMap wins. The upshot: HashMap fixes the table size to a power of two to speed up hashing, and since that worsens the hash distribution, it compensates by tweaking the hash function. Let's look at how, given a key's hashCode, Hashtable and HashMap each map it to a bucket (a slot in the Entry array).

    Because HashMap uses a power-of-two table size, the index computation needs no division, only a bitwise AND. To counter the extra collisions this introduces, HashMap, after calling the object's hashCode() method, applies some additional bit operations to scatter the data.

 1 The following code and comments are from java.util.Hashtable
 2  
 3 // Mask off the sign bit so the index is non-negative (keep the low 31 bits)
 4 int hash = hash(key);
 5 int index = (hash & 0x7FFFFFFF) % tab.length;
 6  
 7 // Essentially just key.hashCode()
 8 private int hash(Object k) {
 9     // hashSeed will be zero if alternative hashing is disabled.
10     return hashSeed ^ k.hashCode();
11 }
12 The following code and comments are from java.util.HashMap
13 int hash = hash(key);
14 int i = indexFor(hash, table.length);
15  
16 // After computing key.hashCode(), mix the bits to reduce hash collisions
17 final int hash(Object k) {
18     int h = hashSeed;
19     if (0 != h && k instanceof String) {
20         return sun.misc.Hashing.stringHash32((String) k);
21     }
22 
23     h ^= k.hashCode();
24  
25     // This function ensures that hashCodes that differ only by
26     // constant multiples at each bit position have a bounded
27     // number of collisions (approximately 8 at default load factor).
28     h ^= (h >>> 20) ^ (h >>> 12);
29     return h ^ (h >>> 7) ^ (h >>> 4);
30 }
31  
32 // Computing the index no longer needs a division
33 static int indexFor(int h, int length) {
34     // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
35     return h & (length-1);
36 }
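A quick sketch (the class name IndexDemo is ours) verifying the identity HashMap relies on: for a power-of-two table length and a non-negative hash, `h & (length - 1)` gives the same bucket index as `h % length`.

```java
public class IndexDemo {
    // The power-of-two trick used by HashMap's indexFor():
    // h & (length - 1) == h % length for non-negative h
    // whenever length is a power of two.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16; // must be a power of two
        for (int h = 0; h < 1000; h++) {
            if (indexFor(h, length) != h % length) {
                throw new AssertionError("mismatch at h=" + h);
            }
        }
        System.out.println("AND and modulo agree for all tested hashes");
    }
}
```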

     Both HashMap and Hashtable use a variable called hashSeed when computing the hash. The reason is that entries mapped to the same bucket are stored as a linked list, and linked lists are slow to search, so the performance of HashMap and Hashtable is very sensitive to hash collisions; the optional alternative hashing controlled by hashSeed exists to reduce those collisions. In fact, this optimization was removed in JDK 1.8, because there a bucket whose list grows past a threshold (8 entries) is converted into a red-black tree, which greatly speeds up lookups.

 5.4、Thread Safety

    Hashtable is synchronized and HashMap is not: Hashtable can be used from multiple threads without extra synchronization, while HashMap cannot. The price is that Hashtable's use of the synchronized modifier slows every operation down, even in single-threaded use.
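If you need a thread-safe map today, the usual alternatives are Collections.synchronizedMap or ConcurrentHashMap. A minimal sketch (the class name ThreadSafeMaps and the helper parallelCount are ours) using ConcurrentHashMap's atomic merge() so that concurrent increments are never lost:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeMaps {

    // Increment the same key from several threads; merge() is atomic on
    // ConcurrentHashMap, so no updates are lost.
    static int parallelCount(int nThreads, int perThread) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] threads = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    map.merge("counter", 1, Integer::sum);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return map.get("counter");
    }

    public static void main(String[] args) throws InterruptedException {
        // Option 1: wrap a HashMap; every call locks the whole wrapper object
        Map<String, Integer> synced = Collections.synchronizedMap(new HashMap<>());
        synced.put("k", 1);

        // Option 2: ConcurrentHashMap; finer-grained locking, no null keys/values
        System.out.println(parallelCount(10, 1000)); // prints 10000
    }
}
```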

 5.5、Code Style

    HashMap's source code is considerably cleaner than Hashtable's.

 5.6、When to Use Which

   Hashtable is obsolete; do not use it in new code. In short: if you do not need thread safety, use HashMap; if you do, use ConcurrentHashMap.

 5.7、Ongoing Optimization

    Although the public interfaces of HashMap and Hashtable rarely change, every JDK release optimizes their internal implementations, for example the red-black tree optimization in JDK 1.8. So use the newest JDK you can: besides the flashy new features, the ordinary APIs also get faster. Why keep optimizing Hashtable if it is obsolete? Because old code still uses it, and that old code benefits from the optimizations too.

六、Summary

  In this article we took a deep look at three Map implementations, analyzing their underlying principles, initialization, insertion, deletion and lookup, resizing, and lookup optimizations. This background is very helpful for everyday use.

 References:
 https://www.cnblogs.com/skywang12345/p/3310835.html
 http://www.importnew.com/20386.html
 http://www.importnew.com/29832.html
 http://www.importnew.com/24822.html

posted @ 2018-10-31 17:18  精心出精品