HashMap Source Code Deep Dive (based on jdk1.8.0_231)
1. Introduction to the HashMap Source
- HashMap is, at its core, a hash table. Before jdk1.8 it resolved hash collisions with linked lists alone; jdk1.8 uses both linked lists and red-black trees. Concretely, once a bin's list grows past 8 nodes (and the table holds at least 64 buckets), the list is converted into a red-black tree. In short, before jdk1.8 HashMap's data structure was array + linked list (the array slots are also called buckets), and since jdk1.8 it is array + linked list + red-black tree;
- HashMap is roughly equivalent to Hashtable, except that HashMap is not thread-safe while Hashtable is. In a multi-threaded environment, either lock the HashMap externally, use a thread-safe class instead, or wrap it: Map m = Collections.synchronizedMap(new HashMap(...));
- The capacity of a HashMap is always a power of two;
- HashMap introduces three parameters: capacity, threshold, and loadFactor. Once size exceeds threshold = capacity * loadFactor, the map resizes, and the new table capacity is twice the old one. The default loadFactor is 0.75, a compromise in the time-space trade-off: too large a loadFactor saves space but invites more hash collisions, lengthening both get and put; too small a loadFactor reduces collisions but wastes space;
- HashMap allows null as a key and as a value; Hashtable allows neither;
- If you intend to store a large number of key-value pairs, pre-sizing the capacity performs far better than repeated resizing;
- HashMap's iterators are fail-fast: once you are iterating a HashMap instance, any structural modification made other than through the iterator's own methods causes an exception, as the sketch below shows;
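A minimal sketch of the fail-fast behavior (the class name and key/value choices here are illustrative): removing an entry through the map while an iterator is live makes the iterator's next call to next() throw, whereas the commented-out it.remove() would be the legal route.

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "a");
        map.put(2, "b");

        Iterator<Integer> it = map.keySet().iterator();
        while (it.hasNext()) {
            Integer k = it.next();
            if (k == 1) {
                // it.remove();       // fine: structural change via the iterator
                map.remove(k);        // fail-fast: next() will throw ConcurrentModificationException
            }
        }
    }
}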
2. HashMap's In-Memory Logical Structure
3. HashMap UML Sketch
HashMap API Overview
Fields defined in the HashMap class
private static final long serialVersionUID = 362498820763181265L;
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
static final int MAXIMUM_CAPACITY = 1 << 30;
static final float DEFAULT_LOAD_FACTOR = 0.75f;
static final int TREEIFY_THRESHOLD = 8;
static final int UNTREEIFY_THRESHOLD = 6;
static final int MIN_TREEIFY_CAPACITY = 64;
transient Node<K,V>[] table;
transient Set<Map.Entry<K,V>> entrySet;
transient int size;
transient int modCount;
int threshold;
final float loadFactor;
Constructors
public HashMap(int initialCapacity, float loadFactor)
public HashMap(int initialCapacity)
public HashMap()
public HashMap(Map<? extends K, ? extends V> m)
Overridden and newly added methods
public int size()
public boolean isEmpty()
public V get(Object key)
public V put(K key, V value)
public boolean containsKey(Object key)
public void putAll(Map<? extends K, ? extends V> m)
public V remove(Object key)
public void clear()
public boolean containsValue(Object value)
----------------The three views of HashMap------------
public Set<K> keySet()
public Collection<V> values()
public Set<Map.Entry<K,V>> entrySet()
----------------Methods added in JDK 8----------------
public V getOrDefault(Object key, V defaultValue)
public V putIfAbsent(K key, V value)
public boolean remove(Object key, Object value)
public boolean replace(K key, V oldValue, V newValue)
public V replace(K key, V value)
public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction)
public V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction)
public V compute(K key,BiFunction<? super K, ? super V, ? extends V> remappingFunction)
public V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction)
public void forEach(BiConsumer<? super K, ? super V> action)
public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function)
----------------------------------------------
public Object clone()
Package-private methods in the HashMap class
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict)
final Node<K,V> getNode(int hash, Object key)
final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict)
final Node<K,V>[] resize()
final void treeifyBin(Node<K,V>[] tab, int hash)
final Node<K,V> removeNode(int hash, Object key, Object value,boolean matchValue, boolean movable)
final float loadFactor()
final int capacity()
Node<K,V> newNode(int hash, K key, V value, Node<K,V> next)
Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next)
TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next)
TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next)
void reinitialize()
void afterNodeAccess(Node<K,V> p)
void afterNodeInsertion(boolean evict)
void afterNodeRemoval(Node<K,V> p)
void internalWriteEntries(java.io.ObjectOutputStream s)
final TreeNode<K,V> root()
static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root)
final TreeNode<K,V> find(int h, Object k, Class<?> kc)
final TreeNode<K,V> getTreeNode(int h, Object k)
static int tieBreakOrder(Object a, Object b)
final void treeify(Node<K,V>[] tab)
final Node<K,V> untreeify(HashMap<K,V> map)
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab, int h, K k, V v)
final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab, boolean movable)
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit)
static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root, TreeNode<K,V> p)
static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root, TreeNode<K,V> p)
static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root, TreeNode<K,V> x)
static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root, TreeNode<K,V> x)
static <K,V> boolean checkInvariants(TreeNode<K,V> t)
Private methods in the class
private void writeObject(java.io.ObjectOutputStream s)
private void readObject(java.io.ObjectInputStream s)
4. HashMap Source Walkthrough
package java.util;
import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import sun.misc.SharedSecrets;
public class HashMap<K,V> extends AbstractMap<K,V>
implements Map<K,V>, Cloneable, Serializable {
private static final long serialVersionUID = 362498820763181265L;
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; //default initial capacity: 16
static final int MAXIMUM_CAPACITY = 1 << 30; //maximum capacity
static final float DEFAULT_LOAD_FACTOR = 0.75f; //default load factor
static final int TREEIFY_THRESHOLD = 8; //a bin's list is treeified once its length grows past this
static final int UNTREEIFY_THRESHOLD = 6;
static final int MIN_TREEIFY_CAPACITY = 64;
static class Node<K,V> implements Map.Entry<K,V> {
final int hash;
final K key;
V value;
Node<K,V> next;
Node(int hash, K key, V value, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.value = value;
this.next = next;
}
public final K getKey() { return key; }
public final V getValue() { return value; }
public final String toString() { return key + "=" + value; }
public final int hashCode() {
return Objects.hashCode(key) ^ Objects.hashCode(value);
}
public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
}
public final boolean equals(Object o) {
if (o == this)
return true;
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>)o;
if (Objects.equals(key, e.getKey()) &&
Objects.equals(value, e.getValue()))
return true;
}
return false;
}
}
/* ---------------- Static utilities -------------- */
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
static Class<?> comparableClassFor(Object x) {
if (x instanceof Comparable) {
Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
if ((c = x.getClass()) == String.class) // bypass checks
return c;
if ((ts = c.getGenericInterfaces()) != null) {
for (int i = 0; i < ts.length; ++i) {
if (((t = ts[i]) instanceof ParameterizedType) &&
((p = (ParameterizedType)t).getRawType() ==
Comparable.class) &&
(as = p.getActualTypeArguments()) != null &&
as.length == 1 && as[0] == c) // type arg is c
return c;
}
}
}
return null;
}
@SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
static int compareComparables(Class<?> kc, Object k, Object x) {
return (x == null || x.getClass() != kc ? 0 :
((Comparable)k).compareTo(x));
}
//Rounds the requested HashMap capacity up to a power of two
static final int tableSizeFor(int cap) {
int n = cap - 1;
n |= n >>> 1;
n |= n >>> 2;
n |= n >>> 4;
n |= n >>> 8;
n |= n >>> 16;
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
/* ---------------- Fields -------------- */
transient Node<K,V>[] table;
transient Set<Map.Entry<K,V>> entrySet;
transient int size;
transient int modCount;
int threshold;
final float loadFactor;
/* ---------------- Public operations -------------- */
public HashMap(int initialCapacity, float loadFactor) {
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal initial capacity: " +
initialCapacity);
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal load factor: " +
loadFactor);
this.loadFactor = loadFactor;
this.threshold = tableSizeFor(initialCapacity);
}
public HashMap(int initialCapacity) {
this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
public HashMap() {
this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}
public HashMap(Map<? extends K, ? extends V> m) {
this.loadFactor = DEFAULT_LOAD_FACTOR;
putMapEntries(m, false);
}
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
int s = m.size();
if (s > 0) {
if (table == null) { // pre-size
float ft = ((float)s / loadFactor) + 1.0F;
int t = ((ft < (float)MAXIMUM_CAPACITY) ?
(int)ft : MAXIMUM_CAPACITY);
if (t > threshold)
threshold = tableSizeFor(t);
}
else if (s > threshold)
resize();
for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
K key = e.getKey();
V value = e.getValue();
putVal(hash(key), key, value, false, evict);
}
}
}
public int size() {
return size;
}
public boolean isEmpty() {
return size == 0;
}
public V get(Object key) {
Node<K,V> e;
return (e = getNode(hash(key), key)) == null ? null : e.value;
}
final Node<K,V> getNode(int hash, Object key) {
Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
if ((tab = table) != null && (n = tab.length) > 0 &&
(first = tab[(n - 1) & hash]) != null) {
if (first.hash == hash && // always check first node
((k = first.key) == key || (key != null && key.equals(k))))
return first;
if ((e = first.next) != null) {
if (first instanceof TreeNode)
return ((TreeNode<K,V>)first).getTreeNode(hash, key);
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
} while ((e = e.next) != null);
}
}
return null;
}
public boolean containsKey(Object key) {
return getNode(hash(key), key) != null;
}
//Inserts a key-value pair. If hashes collide but keys differ, collision handling kicks in:
//the colliding nodes are stored in a linked list or a red-black tree. If both hash and key
//match, the old value is replaced. Returns the replaced value, or null for a new node.
public V put(K key, V value) {
return putVal(hash(key), key, value, false, true);
}
/**
* Implements Map.put and related methods.
* @param onlyIfAbsent if true, don't change existing value
* @param evict if false, the table is in creation mode.
* @return previous value, or null if none
*/
//The core insertion routine of HashMap
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node<K,V>[] tab; Node<K,V> p; int n, i;
if ((tab = table) == null || (n = tab.length) == 0) //if the table is null or empty, resize() lazily allocates it
n = (tab = resize()).length;
if ((p = tab[i = (n - 1) & hash]) == null) // (n-1) & hash yields the index; that slot is empty, no collision, insert directly
tab[i] = newNode(hash, key, value, null);
else { //a collision: the slot is occupied
Node<K,V> e; K k;
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k)))) //hash and key both match: remember p in e
e = p;
//otherwise this is a genuine hash collision
else if (p instanceof TreeNode) //p is a TreeNode: this bin already resolves collisions with a red-black tree
e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
//otherwise a linked list resolves the collision; once the list grows past 8 nodes it is converted to a red-black tree
else {
for (int binCount = 0; ; ++binCount) {
if ((e = p.next) == null) {
p.next = newNode(hash, key, value, null);
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
treeifyBin(tab, hash);
break;
}
if (e.hash == hash && //hash and key both match
((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e;
}
}
if (e != null) { // existing mapping for key
V oldValue = e.value; //same hash and key: replace the old value with the new one
if (!onlyIfAbsent || oldValue == null)
e.value = value;
afterNodeAccess(e);
return oldValue;
}
}
++modCount;
if (++size > threshold) //resize if the incremented size exceeds the threshold
resize();
afterNodeInsertion(evict);
return null;
}
//resize() does two things: grow and migrate. Growing doubles the HashMap's capacity; migrating moves every element of the old table into the newly grown one,
// preserving the relative order of the elements within each bin
final Node<K,V>[] resize() {
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
//Case 1: oldCap > 0
if (oldCap > 0) {
//1.1: oldCap has already reached (1<<30): it cannot be doubled again, so just set threshold = Integer.MAX_VALUE
if (oldCap >= MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return oldTab;
}
//1.2: normal growth: newCap = oldCap << 1, newThr = oldThr << 1
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
oldCap >= DEFAULT_INITIAL_CAPACITY)
newThr = oldThr << 1; // double threshold
}
//Case 2: oldCap == 0 but threshold > 0: an initial capacity was requested via a constructor
else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;
else { // zero initial threshold signifies using defaults
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
if (newThr == 0) {
float ft = (float)newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
(int)ft : Integer.MAX_VALUE);
}
threshold = newThr;
@SuppressWarnings({"rawtypes","unchecked"})
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
table = newTab;
if (oldTab != null) {
for (int j = 0; j < oldCap; ++j) {
Node<K,V> e;
if ((e = oldTab[j]) != null) {
oldTab[j] = null;
if (e.next == null)
newTab[e.hash & (newCap - 1)] = e;
else if (e instanceof TreeNode)
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
else { // preserve order
Node<K,V> loHead = null, loTail = null;
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
next = e.next;
if ((e.hash & oldCap) == 0) {
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
else {
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
//Converts a bin's list into a red-black tree, unless the table length is < 64, in which case it resizes first. Why resize first? The library designers reason that heavy collisions may simply mean the table is too short; after growing, the collisions may well disperse.
// Of course, collisions can also persist after the resize.
final void treeifyBin(Node<K,V>[] tab, int hash) {
int n, index; Node<K,V> e;
if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
resize();
else if ((e = tab[index = (n - 1) & hash]) != null) { //the table is already large enough: turn this bin's list into a red-black tree
TreeNode<K,V> hd = null, tl = null;
do {
TreeNode<K,V> p = replacementTreeNode(e, null);
if (tl == null)
hd = p;
else {
p.prev = tl;
tl.next = p;
}
tl = p;
} while ((e = e.next) != null);
if ((tab[index] = hd) != null)
hd.treeify(tab);
}
}
public void putAll(Map<? extends K, ? extends V> m) {
putMapEntries(m, true);
}
public V remove(Object key) {
Node<K,V> e;
return (e = removeNode(hash(key), key, null, false, true)) == null ?
null : e.value;
}
final Node<K,V> removeNode(int hash, Object key, Object value,
boolean matchValue, boolean movable) {
Node<K,V>[] tab; Node<K,V> p; int n, index;
if ((tab = table) != null && (n = tab.length) > 0 &&
(p = tab[index = (n - 1) & hash]) != null) {
Node<K,V> node = null, e; K k; V v;
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
node = p;
else if ((e = p.next) != null) {
if (p instanceof TreeNode)
node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
else {
do {
if (e.hash == hash &&
((k = e.key) == key ||
(key != null && key.equals(k)))) {
node = e;
break;
}
p = e;
} while ((e = e.next) != null);
}
}
if (node != null && (!matchValue || (v = node.value) == value ||
(value != null && value.equals(v)))) {
if (node instanceof TreeNode)
((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
else if (node == p)
tab[index] = node.next;
else
p.next = node.next;
++modCount;
--size;
afterNodeRemoval(node);
return node;
}
}
return null;
}
public void clear() {
Node<K,V>[] tab;
modCount++;
if ((tab = table) != null && size > 0) {
size = 0;
for (int i = 0; i < tab.length; ++i)
tab[i] = null;
}
}
public boolean containsValue(Object value) {
Node<K,V>[] tab; V v;
if ((tab = table) != null && size > 0) {
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next) {
if ((v = e.value) == value ||
(value != null && value.equals(v)))
return true;
}
}
}
return false;
}
public Set<K> keySet() {
Set<K> ks = keySet;
if (ks == null) {
ks = new KeySet();
keySet = ks;
}
return ks;
}
final class KeySet extends AbstractSet<K> {
public final int size() { return size; }
public final void clear() { HashMap.this.clear(); }
public final Iterator<K> iterator() { return new KeyIterator(); }
public final boolean contains(Object o) { return containsKey(o); }
public final boolean remove(Object key) {
return removeNode(hash(key), key, null, false, true) != null;
}
public final Spliterator<K> spliterator() {
return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super K> action) {
Node<K,V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next)
action.accept(e.key);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
public Collection<V> values() {
Collection<V> vs = values;
if (vs == null) {
vs = new Values();
values = vs;
}
return vs;
}
final class Values extends AbstractCollection<V> {
public final int size() { return size; }
public final void clear() { HashMap.this.clear(); }
public final Iterator<V> iterator() { return new ValueIterator(); }
public final boolean contains(Object o) { return containsValue(o); }
public final Spliterator<V> spliterator() {
return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super V> action) {
Node<K,V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next)
action.accept(e.value);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
public Set<Map.Entry<K,V>> entrySet() {
Set<Map.Entry<K,V>> es;
return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
}
final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
public final int size() { return size; }
public final void clear() { HashMap.this.clear(); }
public final Iterator<Map.Entry<K,V>> iterator() {
return new EntryIterator();
}
public final boolean contains(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry<?,?> e = (Map.Entry<?,?>) o;
Object key = e.getKey();
Node<K,V> candidate = getNode(hash(key), key);
return candidate != null && candidate.equals(e);
}
public final boolean remove(Object o) {
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>) o;
Object key = e.getKey();
Object value = e.getValue();
return removeNode(hash(key), key, value, true, true) != null;
}
return false;
}
public final Spliterator<Map.Entry<K,V>> spliterator() {
return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
Node<K,V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next)
action.accept(e);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
// Overrides of JDK8 Map extension methods
@Override
public V getOrDefault(Object key, V defaultValue) {
Node<K,V> e;
return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
}
@Override
public V putIfAbsent(K key, V value) {
return putVal(hash(key), key, value, true, true);
}
@Override
public boolean remove(Object key, Object value) {
return removeNode(hash(key), key, value, true, true) != null;
}
@Override
public boolean replace(K key, V oldValue, V newValue) {
Node<K,V> e; V v;
if ((e = getNode(hash(key), key)) != null &&
((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
e.value = newValue;
afterNodeAccess(e);
return true;
}
return false;
}
@Override
public V replace(K key, V value) {
Node<K,V> e;
if ((e = getNode(hash(key), key)) != null) {
V oldValue = e.value;
e.value = value;
afterNodeAccess(e);
return oldValue;
}
return null;
}
@Override
public V computeIfAbsent(K key,
Function<? super K, ? extends V> mappingFunction) {
if (mappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node<K,V>[] tab; Node<K,V> first; int n, i;
int binCount = 0;
TreeNode<K,V> t = null;
Node<K,V> old = null;
if (size > threshold || (tab = table) == null ||
(n = tab.length) == 0)
n = (tab = resize()).length;
if ((first = tab[i = (n - 1) & hash]) != null) {
if (first instanceof TreeNode)
old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
else {
Node<K,V> e = first; K k;
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
old = e;
break;
}
++binCount;
} while ((e = e.next) != null);
}
V oldValue;
if (old != null && (oldValue = old.value) != null) {
afterNodeAccess(old);
return oldValue;
}
}
V v = mappingFunction.apply(key);
if (v == null) {
return null;
} else if (old != null) {
old.value = v;
afterNodeAccess(old);
return v;
}
else if (t != null)
t.putTreeVal(this, tab, hash, key, v);
else {
tab[i] = newNode(hash, key, v, first);
if (binCount >= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
}
++modCount;
++size;
afterNodeInsertion(true);
return v;
}
public V computeIfPresent(K key,
BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
if (remappingFunction == null)
throw new NullPointerException();
Node<K,V> e; V oldValue;
int hash = hash(key);
if ((e = getNode(hash, key)) != null &&
(oldValue = e.value) != null) {
V v = remappingFunction.apply(key, oldValue);
if (v != null) {
e.value = v;
afterNodeAccess(e);
return v;
}
else
removeNode(hash, key, null, false, true);
}
return null;
}
@Override
public V compute(K key,
BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
if (remappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node<K,V>[] tab; Node<K,V> first; int n, i;
int binCount = 0;
TreeNode<K,V> t = null;
Node<K,V> old = null;
if (size > threshold || (tab = table) == null ||
(n = tab.length) == 0)
n = (tab = resize()).length;
if ((first = tab[i = (n - 1) & hash]) != null) {
if (first instanceof TreeNode)
old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
else {
Node<K,V> e = first; K k;
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
old = e;
break;
}
++binCount;
} while ((e = e.next) != null);
}
}
V oldValue = (old == null) ? null : old.value;
V v = remappingFunction.apply(key, oldValue);
if (old != null) {
if (v != null) {
old.value = v;
afterNodeAccess(old);
}
else
removeNode(hash, key, null, false, true);
}
else if (v != null) {
if (t != null)
t.putTreeVal(this, tab, hash, key, v);
else {
tab[i] = newNode(hash, key, v, first);
if (binCount >= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
}
++modCount;
++size;
afterNodeInsertion(true);
}
return v;
}
@Override
public V merge(K key, V value,
BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
if (value == null)
throw new NullPointerException();
if (remappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node<K,V>[] tab; Node<K,V> first; int n, i;
int binCount = 0;
TreeNode<K,V> t = null;
Node<K,V> old = null;
if (size > threshold || (tab = table) == null ||
(n = tab.length) == 0)
n = (tab = resize()).length;
if ((first = tab[i = (n - 1) & hash]) != null) {
if (first instanceof TreeNode)
old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
else {
Node<K,V> e = first; K k;
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
old = e;
break;
}
++binCount;
} while ((e = e.next) != null);
}
}
if (old != null) {
V v;
if (old.value != null)
v = remappingFunction.apply(old.value, value);
else
v = value;
if (v != null) {
old.value = v;
afterNodeAccess(old);
}
else
removeNode(hash, key, null, false, true);
return v;
}
if (value != null) {
if (t != null)
t.putTreeVal(this, tab, hash, key, value);
else {
tab[i] = newNode(hash, key, value, first);
if (binCount >= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
}
++modCount;
++size;
afterNodeInsertion(true);
}
return value;
}
@Override
public void forEach(BiConsumer<? super K, ? super V> action) {
Node<K,V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next)
action.accept(e.key, e.value);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
@Override
public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
Node<K,V>[] tab;
if (function == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next) {
e.value = function.apply(e.key, e.value);
}
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
/* ------------------------------------------------------------ */
// Cloning and serialization
@SuppressWarnings("unchecked")
@Override
public Object clone() {
HashMap<K,V> result;
try {
result = (HashMap<K,V>)super.clone();
} catch (CloneNotSupportedException e) {
// this shouldn't happen, since we are Cloneable
throw new InternalError(e);
}
result.reinitialize();
result.putMapEntries(this, false);
return result;
}
// These methods are also used when serializing HashSets
final float loadFactor() { return loadFactor; }
final int capacity() {
return (table != null) ? table.length :
(threshold > 0) ? threshold :
DEFAULT_INITIAL_CAPACITY;
}
private void writeObject(java.io.ObjectOutputStream s)
throws IOException {
int buckets = capacity();
// Write out the threshold, loadfactor, and any hidden stuff
s.defaultWriteObject();
s.writeInt(buckets);
s.writeInt(size);
internalWriteEntries(s);
}
private void readObject(java.io.ObjectInputStream s)
throws IOException, ClassNotFoundException {
// Read in the threshold (ignored), loadfactor, and any hidden stuff
s.defaultReadObject();
reinitialize();
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new InvalidObjectException("Illegal load factor: " +
loadFactor);
s.readInt(); // Read and ignore number of buckets
int mappings = s.readInt(); // Read number of mappings (size)
if (mappings < 0)
throw new InvalidObjectException("Illegal mappings count: " +
mappings);
else if (mappings > 0) { // (if zero, use defaults)
// Size the table using given load factor only if within
// range of 0.25...4.0
float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
float fc = (float)mappings / lf + 1.0f;
int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
DEFAULT_INITIAL_CAPACITY :
(fc >= MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY :
tableSizeFor((int)fc));
float ft = (float)cap * lf;
threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
(int)ft : Integer.MAX_VALUE);
// Check Map.Entry[].class since it's the nearest public type to
// what we're actually creating.
SharedSecrets.getJavaOISAccess().checkArray(s, Map.Entry[].class, cap);
@SuppressWarnings({"rawtypes","unchecked"})
Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
table = tab;
// Read the keys and values, and put the mappings in the HashMap
for (int i = 0; i < mappings; i++) {
@SuppressWarnings("unchecked")
K key = (K) s.readObject();
@SuppressWarnings("unchecked")
V value = (V) s.readObject();
putVal(hash(key), key, value, false, false);
}
}
}
/* ------------------------------------------------------------ */
// iterators
abstract class HashIterator {
Node<K,V> next; // next entry to return
Node<K,V> current; // current entry
int expectedModCount; // for fast-fail
int index; // current slot
HashIterator() {
expectedModCount = modCount;
Node<K,V>[] t = table;
current = next = null;
index = 0;
if (t != null && size > 0) { // advance to first entry
do {} while (index < t.length && (next = t[index++]) == null);
}
}
public final boolean hasNext() {
return next != null;
}
final Node<K,V> nextNode() {
Node<K,V>[] t;
Node<K,V> e = next;
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
if (e == null)
throw new NoSuchElementException();
if ((next = (current = e).next) == null && (t = table) != null) {
do {} while (index < t.length && (next = t[index++]) == null);
}
return e;
}
public final void remove() {
Node<K,V> p = current;
if (p == null)
throw new IllegalStateException();
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
current = null;
K key = p.key;
removeNode(hash(key), key, null, false, false);
expectedModCount = modCount;
}
}
final class KeyIterator extends HashIterator
implements Iterator<K> {
public final K next() { return nextNode().key; }
}
final class ValueIterator extends HashIterator
implements Iterator<V> {
public final V next() { return nextNode().value; }
}
final class EntryIterator extends HashIterator
implements Iterator<Map.Entry<K,V>> {
public final Map.Entry<K,V> next() { return nextNode(); }
}
/* ------------------------------------------------------------ */
// spliterators
static class HashMapSpliterator<K,V> {
final HashMap<K,V> map;
Node<K,V> current; // current node
int index; // current index, modified on advance/split
int fence; // one past last index
int est; // size estimate
int expectedModCount; // for comodification checks
HashMapSpliterator(HashMap<K,V> m, int origin,
int fence, int est,
int expectedModCount) {
this.map = m;
this.index = origin;
this.fence = fence;
this.est = est;
this.expectedModCount = expectedModCount;
}
final int getFence() { // initialize fence and size on first use
int hi;
if ((hi = fence) < 0) {
HashMap<K,V> m = map;
est = m.size;
expectedModCount = m.modCount;
Node<K,V>[] tab = m.table;
hi = fence = (tab == null) ? 0 : tab.length;
}
return hi;
}
public final long estimateSize() {
getFence(); // force init
return (long) est;
}
}
static final class KeySpliterator<K,V>
extends HashMapSpliterator<K,V>
implements Spliterator<K> {
KeySpliterator(HashMap<K,V> m, int origin, int fence, int est,
int expectedModCount) {
super(m, origin, fence, est, expectedModCount);
}
public KeySpliterator<K,V> trySplit() {
int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
return (lo >= mid || current != null) ? null :
new KeySpliterator<>(map, lo, index = mid, est >>>= 1,
expectedModCount);
}
public void forEachRemaining(Consumer<? super K> action) {
int i, hi, mc;
if (action == null)
throw new NullPointerException();
HashMap<K,V> m = map;
Node<K,V>[] tab = m.table;
if ((hi = fence) < 0) {
mc = expectedModCount = m.modCount;
hi = fence = (tab == null) ? 0 : tab.length;
}
else
mc = expectedModCount;
if (tab != null && tab.length >= hi &&
(i = index) >= 0 && (i < (index = hi) || current != null)) {
Node<K,V> p = current;
current = null;
do {
if (p == null)
p = tab[i++];
else {
action.accept(p.key);
p = p.next;
}
} while (p != null || i < hi);
if (m.modCount != mc)
throw new ConcurrentModificationException();
}
}
public boolean tryAdvance(Consumer<? super K> action) {
int hi;
if (action == null)
throw new NullPointerException();
Node<K,V>[] tab = map.table;
if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
while (current != null || index < hi) {
if (current == null)
current = tab[index++];
else {
K k = current.key;
current = current.next;
action.accept(k);
if (map.modCount != expectedModCount)
throw new ConcurrentModificationException();
return true;
}
}
}
return false;
}
public int characteristics() {
return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
Spliterator.DISTINCT;
}
}
static final class ValueSpliterator<K,V>
extends HashMapSpliterator<K,V>
implements Spliterator<V> {
ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est,
int expectedModCount) {
super(m, origin, fence, est, expectedModCount);
}
public ValueSpliterator<K,V> trySplit() {
int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
return (lo >= mid || current != null) ? null :
new ValueSpliterator<>(map, lo, index = mid, est >>>= 1,
expectedModCount);
}
public void forEachRemaining(Consumer<? super V> action) {
int i, hi, mc;
if (action == null)
throw new NullPointerException();
HashMap<K,V> m = map;
Node<K,V>[] tab = m.table;
if ((hi = fence) < 0) {
mc = expectedModCount = m.modCount;
hi = fence = (tab == null) ? 0 : tab.length;
}
else
mc = expectedModCount;
if (tab != null && tab.length >= hi &&
(i = index) >= 0 && (i < (index = hi) || current != null)) {
Node<K,V> p = current;
current = null;
do {
if (p == null)
p = tab[i++];
else {
action.accept(p.value);
p = p.next;
}
} while (p != null || i < hi);
if (m.modCount != mc)
throw new ConcurrentModificationException();
}
}
public boolean tryAdvance(Consumer<? super V> action) {
int hi;
if (action == null)
throw new NullPointerException();
Node<K,V>[] tab = map.table;
if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
while (current != null || index < hi) {
if (current == null)
current = tab[index++];
else {
V v = current.value;
current = current.next;
action.accept(v);
if (map.modCount != expectedModCount)
throw new ConcurrentModificationException();
return true;
}
}
}
return false;
}
public int characteristics() {
return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);
}
}
static final class EntrySpliterator<K,V>
extends HashMapSpliterator<K,V>
implements Spliterator<Map.Entry<K,V>> {
EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est,
int expectedModCount) {
super(m, origin, fence, est, expectedModCount);
}
public EntrySpliterator<K,V> trySplit() {
int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
return (lo >= mid || current != null) ? null :
new EntrySpliterator<>(map, lo, index = mid, est >>>= 1,
expectedModCount);
}
public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) {
int i, hi, mc;
if (action == null)
throw new NullPointerException();
HashMap<K,V> m = map;
Node<K,V>[] tab = m.table;
if ((hi = fence) < 0) {
mc = expectedModCount = m.modCount;
hi = fence = (tab == null) ? 0 : tab.length;
}
else
mc = expectedModCount;
if (tab != null && tab.length >= hi &&
(i = index) >= 0 && (i < (index = hi) || current != null)) {
Node<K,V> p = current;
current = null;
do {
if (p == null)
p = tab[i++];
else {
action.accept(p);
p = p.next;
}
} while (p != null || i < hi);
if (m.modCount != mc)
throw new ConcurrentModificationException();
}
}
public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) {
int hi;
if (action == null)
throw new NullPointerException();
Node<K,V>[] tab = map.table;
if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
while (current != null || index < hi) {
if (current == null)
current = tab[index++];
else {
Node<K,V> e = current;
current = current.next;
action.accept(e);
if (map.modCount != expectedModCount)
throw new ConcurrentModificationException();
return true;
}
}
}
return false;
}
public int characteristics() {
return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
Spliterator.DISTINCT;
}
}
/* ------------------------------------------------------------ */
// LinkedHashMap support
// Create a regular (non-tree) node
Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
return new Node<>(hash, key, value, next);
}
// For conversion from TreeNodes to plain nodes
Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {
return new Node<>(p.hash, p.key, p.value, next);
}
// Create a tree bin node
TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) {
return new TreeNode<>(hash, key, value, next);
}
// For treeifyBin
TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
return new TreeNode<>(p.hash, p.key, p.value, next);
}
/**
* Reset to initial default state. Called by clone and readObject.
*/
void reinitialize() {
table = null;
entrySet = null;
keySet = null;
values = null;
modCount = 0;
threshold = 0;
size = 0;
}
// Callbacks to allow LinkedHashMap post-actions
void afterNodeAccess(Node<K,V> p) { }
void afterNodeInsertion(boolean evict) { }
void afterNodeRemoval(Node<K,V> p) { }
// Called only from writeObject, to ensure compatible ordering.
void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
Node<K,V>[] tab;
if (size > 0 && (tab = table) != null) {
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next) {
s.writeObject(e.key);
s.writeObject(e.value);
}
}
}
}
/* ------------------------------------------------------------ */
// Tree bins
/**
* Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
* extends Node) so can be used as extension of either regular or
* linked node.
*/
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
TreeNode<K,V> parent; // red-black tree links
TreeNode<K,V> left;
TreeNode<K,V> right;
TreeNode<K,V> prev; // needed to unlink next upon deletion
boolean red;
TreeNode(int hash, K key, V val, Node<K,V> next) {
super(hash, key, val, next);
}
/**
* Returns root of tree containing this node.
*/
final TreeNode<K,V> root() {
for (TreeNode<K,V> r = this, p;;) {
if ((p = r.parent) == null)
return r;
r = p;
}
}
/**
* Ensures that the given root is the first node of its bin.
*/
static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
int n;
if (root != null && tab != null && (n = tab.length) > 0) {
int index = (n - 1) & root.hash;
TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
if (root != first) {
Node<K,V> rn;
tab[index] = root;
TreeNode<K,V> rp = root.prev;
if ((rn = root.next) != null)
((TreeNode<K,V>)rn).prev = rp;
if (rp != null)
rp.next = rn;
if (first != null)
first.prev = root;
root.next = first;
root.prev = null;
}
assert checkInvariants(root);
}
}
/**
* Finds the node starting at root p with the given hash and key.
* The kc argument caches comparableClassFor(key) upon first use
* comparing keys.
*/
final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
TreeNode<K,V> p = this;
do {
int ph, dir; K pk;
TreeNode<K,V> pl = p.left, pr = p.right, q;
if ((ph = p.hash) > h)
p = pl;
else if (ph < h)
p = pr;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if (pl == null)
p = pr;
else if (pr == null)
p = pl;
else if ((kc != null ||
(kc = comparableClassFor(k)) != null) &&
(dir = compareComparables(kc, k, pk)) != 0)
p = (dir < 0) ? pl : pr;
else if ((q = pr.find(h, k, kc)) != null)
return q;
else
p = pl;
} while (p != null);
return null;
}
/**
* Calls find for root node.
*/
final TreeNode<K,V> getTreeNode(int h, Object k) {
return ((parent != null) ? root() : this).find(h, k, null);
}
/**
* Tie-breaking utility for ordering insertions when equal
* hashCodes and non-comparable. We don't require a total
* order, just a consistent insertion rule to maintain
* equivalence across rebalancings. Tie-breaking further than
* necessary simplifies testing a bit.
*/
static int tieBreakOrder(Object a, Object b) {
int d;
if (a == null || b == null ||
(d = a.getClass().getName().
compareTo(b.getClass().getName())) == 0)
d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
-1 : 1);
return d;
}
/**
* Forms tree of the nodes linked from this node.
*/
final void treeify(Node<K,V>[] tab) {
TreeNode<K,V> root = null;
for (TreeNode<K,V> x = this, next; x != null; x = next) {
next = (TreeNode<K,V>)x.next;
x.left = x.right = null;
if (root == null) {
x.parent = null;
x.red = false;
root = x;
}
else {
K k = x.key;
int h = x.hash;
Class<?> kc = null;
for (TreeNode<K,V> p = root;;) {
int dir, ph;
K pk = p.key;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0)
dir = tieBreakOrder(k, pk);
TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
x.parent = xp;
if (dir <= 0)
xp.left = x;
else
xp.right = x;
root = balanceInsertion(root, x);
break;
}
}
}
}
moveRootToFront(tab, root);
}
/**
* Returns a list of non-TreeNodes replacing those linked from
* this node.
*/
final Node<K,V> untreeify(HashMap<K,V> map) {
Node<K,V> hd = null, tl = null;
for (Node<K,V> q = this; q != null; q = q.next) {
Node<K,V> p = map.replacementNode(q, null);
if (tl == null)
hd = p;
else
tl.next = p;
tl = p;
}
return hd;
}
/**
* Tree version of putVal.
*/
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
int h, K k, V v) {
Class<?> kc = null;
boolean searched = false;
TreeNode<K,V> root = (parent != null) ? root() : this;
for (TreeNode<K,V> p = root;;) {
int dir, ph; K pk;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0) {
if (!searched) {
TreeNode<K,V> q, ch;
searched = true;
if (((ch = p.left) != null &&
(q = ch.find(h, k, kc)) != null) ||
((ch = p.right) != null &&
(q = ch.find(h, k, kc)) != null))
return q;
}
dir = tieBreakOrder(k, pk);
}
TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
Node<K,V> xpn = xp.next;
TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
if (dir <= 0)
xp.left = x;
else
xp.right = x;
xp.next = x;
x.parent = x.prev = xp;
if (xpn != null)
((TreeNode<K,V>)xpn).prev = x;
moveRootToFront(tab, balanceInsertion(root, x));
return null;
}
}
}
/**
* Removes the given node, that must be present before this call.
* This is messier than typical red-black deletion code because we
* cannot swap the contents of an interior node with a leaf
* successor that is pinned by "next" pointers that are accessible
* independently during traversal. So instead we swap the tree
* linkages. If the current tree appears to have too few nodes,
* the bin is converted back to a plain bin. (The test triggers
* somewhere between 2 and 6 nodes, depending on tree structure).
*/
final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab,
boolean movable) {
int n;
if (tab == null || (n = tab.length) == 0)
return;
int index = (n - 1) & hash;
TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl;
TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev;
if (pred == null)
tab[index] = first = succ;
else
pred.next = succ;
if (succ != null)
succ.prev = pred;
if (first == null)
return;
if (root.parent != null)
root = root.root();
if (root == null
|| (movable
&& (root.right == null
|| (rl = root.left) == null
|| rl.left == null))) {
tab[index] = first.untreeify(map); // too small
return;
}
TreeNode<K,V> p = this, pl = left, pr = right, replacement;
if (pl != null && pr != null) {
TreeNode<K,V> s = pr, sl;
while ((sl = s.left) != null) // find successor
s = sl;
boolean c = s.red; s.red = p.red; p.red = c; // swap colors
TreeNode<K,V> sr = s.right;
TreeNode<K,V> pp = p.parent;
if (s == pr) { // p was s's direct parent
p.parent = s;
s.right = p;
}
else {
TreeNode<K,V> sp = s.parent;
if ((p.parent = sp) != null) {
if (s == sp.left)
sp.left = p;
else
sp.right = p;
}
if ((s.right = pr) != null)
pr.parent = s;
}
p.left = null;
if ((p.right = sr) != null)
sr.parent = p;
if ((s.left = pl) != null)
pl.parent = s;
if ((s.parent = pp) == null)
root = s;
else if (p == pp.left)
pp.left = s;
else
pp.right = s;
if (sr != null)
replacement = sr;
else
replacement = p;
}
else if (pl != null)
replacement = pl;
else if (pr != null)
replacement = pr;
else
replacement = p;
if (replacement != p) {
TreeNode<K,V> pp = replacement.parent = p.parent;
if (pp == null)
root = replacement;
else if (p == pp.left)
pp.left = replacement;
else
pp.right = replacement;
p.left = p.right = p.parent = null;
}
TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement);
if (replacement == p) { // detach
TreeNode<K,V> pp = p.parent;
p.parent = null;
if (pp != null) {
if (p == pp.left)
pp.left = null;
else if (p == pp.right)
pp.right = null;
}
}
if (movable)
moveRootToFront(tab, r);
}
/**
* Splits nodes in a tree bin into lower and upper tree bins,
* or untreeifies if now too small. Called only from resize;
* see above discussion about split bits and indices.
*
* @param map the map
* @param tab the table for recording bin heads
* @param index the index of the table being split
* @param bit the bit of hash to split on
*/
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
TreeNode<K,V> b = this;
// Relink into lo and hi lists, preserving order
TreeNode<K,V> loHead = null, loTail = null;
TreeNode<K,V> hiHead = null, hiTail = null;
int lc = 0, hc = 0;
for (TreeNode<K,V> e = b, next; e != null; e = next) {
next = (TreeNode<K,V>)e.next;
e.next = null;
if ((e.hash & bit) == 0) {
if ((e.prev = loTail) == null)
loHead = e;
else
loTail.next = e;
loTail = e;
++lc;
}
else {
if ((e.prev = hiTail) == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
++hc;
}
}
if (loHead != null) {
if (lc <= UNTREEIFY_THRESHOLD)
tab[index] = loHead.untreeify(map);
else {
tab[index] = loHead;
if (hiHead != null) // (else is already treeified)
loHead.treeify(tab);
}
}
if (hiHead != null) {
if (hc <= UNTREEIFY_THRESHOLD)
tab[index + bit] = hiHead.untreeify(map);
else {
tab[index + bit] = hiHead;
if (loHead != null)
hiHead.treeify(tab);
}
}
}
/* ------------------------------------------------------------ */
// Red-black tree methods, all adapted from CLR
static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
TreeNode<K,V> p) {
TreeNode<K,V> r, pp, rl;
if (p != null && (r = p.right) != null) {
if ((rl = p.right = r.left) != null)
rl.parent = p;
if ((pp = r.parent = p.parent) == null)
(root = r).red = false;
else if (pp.left == p)
pp.left = r;
else
pp.right = r;
r.left = p;
p.parent = r;
}
return root;
}
static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
TreeNode<K,V> p) {
TreeNode<K,V> l, pp, lr;
if (p != null && (l = p.left) != null) {
if ((lr = p.left = l.right) != null)
lr.parent = p;
if ((pp = l.parent = p.parent) == null)
(root = l).red = false;
else if (pp.right == p)
pp.right = l;
else
pp.left = l;
l.right = p;
p.parent = l;
}
return root;
}
static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
TreeNode<K,V> x) {
x.red = true;
for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
if ((xp = x.parent) == null) {
x.red = false;
return x;
}
else if (!xp.red || (xpp = xp.parent) == null)
return root;
if (xp == (xppl = xpp.left)) {
if ((xppr = xpp.right) != null && xppr.red) {
xppr.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else {
if (x == xp.right) {
root = rotateLeft(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateRight(root, xpp);
}
}
}
}
else {
if (xppl != null && xppl.red) {
xppl.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else {
if (x == xp.left) {
root = rotateRight(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateLeft(root, xpp);
}
}
}
}
}
}
static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,
TreeNode<K,V> x) {
for (TreeNode<K,V> xp, xpl, xpr;;) {
if (x == null || x == root)
return root;
else if ((xp = x.parent) == null) {
x.red = false;
return x;
}
else if (x.red) {
x.red = false;
return root;
}
else if ((xpl = xp.left) == x) {
if ((xpr = xp.right) != null && xpr.red) {
xpr.red = false;
xp.red = true;
root = rotateLeft(root, xp);
xpr = (xp = x.parent) == null ? null : xp.right;
}
if (xpr == null)
x = xp;
else {
TreeNode<K,V> sl = xpr.left, sr = xpr.right;
if ((sr == null || !sr.red) &&
(sl == null || !sl.red)) {
xpr.red = true;
x = xp;
}
else {
if (sr == null || !sr.red) {
if (sl != null)
sl.red = false;
xpr.red = true;
root = rotateRight(root, xpr);
xpr = (xp = x.parent) == null ?
null : xp.right;
}
if (xpr != null) {
xpr.red = (xp == null) ? false : xp.red;
if ((sr = xpr.right) != null)
sr.red = false;
}
if (xp != null) {
xp.red = false;
root = rotateLeft(root, xp);
}
x = root;
}
}
}
else { // symmetric
if (xpl != null && xpl.red) {
xpl.red = false;
xp.red = true;
root = rotateRight(root, xp);
xpl = (xp = x.parent) == null ? null : xp.left;
}
if (xpl == null)
x = xp;
else {
TreeNode<K,V> sl = xpl.left, sr = xpl.right;
if ((sl == null || !sl.red) &&
(sr == null || !sr.red)) {
xpl.red = true;
x = xp;
}
else {
if (sl == null || !sl.red) {
if (sr != null)
sr.red = false;
xpl.red = true;
root = rotateLeft(root, xpl);
xpl = (xp = x.parent) == null ?
null : xp.left;
}
if (xpl != null) {
xpl.red = (xp == null) ? false : xp.red;
if ((sl = xpl.left) != null)
sl.red = false;
}
if (xp != null) {
xp.red = false;
root = rotateRight(root, xp);
}
x = root;
}
}
}
}
}
/**
* Recursive invariant check
*/
static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
tb = t.prev, tn = (TreeNode<K,V>)t.next;
if (tb != null && tb.next != t)
return false;
if (tn != null && tn.prev != t)
return false;
if (tp != null && t != tp.left && t != tp.right)
return false;
if (tl != null && (tl.parent != t || tl.hash > t.hash))
return false;
if (tr != null && (tr.parent != t || tr.hash < t.hash))
return false;
if (t.red && tl != null && tl.red && tr != null && tr.red)
return false;
if (tl != null && !checkInvariants(tl))
return false;
if (tr != null && !checkInvariants(tr))
return false;
return true;
}
}
}
5. Appreciating the HashMap Code
Some of HashMap's functions are written so elegantly, with designs that catch the eye, that they are worth savoring carefully and mining for tricks.
- static final int tableSizeFor(int cap)
This function rounds the HashMap's requested initial capacity up to a power of two: pass 12 and the actual capacity becomes 16; pass 1023 and it becomes 1024. (A quick sanity check follows the derivation below.)
Its ingenuity is doing the job with a few unsigned right shifts, in O(1) time and space, a reminder that anything involving powers of two is a candidate for bit manipulation.
static final int tableSizeFor(int cap) {
int n = cap - 1;
n |= n >>> 1;
n |= n >>> 2;
n |= n >>> 4;
n |= n >>> 8;
n |= n >>> 16;
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
For simplicity (ignoring cap <= 0), let n = cap - 1 = 0...01xxxxxxxxxxx (the binary representation of the int).
n = 0...01xxxxxxxxxxx;
n |= n>>>1; => n = 0...011xxxxxxxxxx; //the highest 1 bit is copied 1 position to the right, so the top 2 significant bits are now 1
n |= n>>>2; => n = 0...01111xxxxxxxx; //the top 2 one-bits are copied 2 positions to the right, so the top 4 bits are now 1
n |= n>>>4; => n = 0...011111111xxxx; //the top 4 one-bits are copied 4 positions to the right, so the top 8 bits are now 1
n |= n>>>8; => n = 0...0111111111111; //the top 8 one-bits are copied 8 positions to the right, so the top 16 bits are now 1
n |= n>>>16; => n = 0...0111111111111; //the top 16 one-bits are copied 16 positions to the right, which guarantees that all 31 value bits of the int can be set to 1
In short, after this sequence every bit from the highest 1 bit downwards is 1. The shift distances 1, 2, 4, 8, 16 come from Java's int being stored in 32 bits (1 sign bit + 31 value bits); if Java's int were 64-bit, the shifts would be 1, 2, 4, 8, 16, 32.
Next, why compute n = cap - 1 first and return n + 1 at the end?
Without n = cap - 1, that is, with n = cap directly, the result is perfectly correct whenever cap is not a power of two; but when cap is already a power of two, the function would return the next power up: cap = 8 would yield 16, and cap = 16 would yield 32.
The reason for returning n + 1: as shown above, the sequence of shifts leaves every bit from the highest 1 downwards set, i.e. the value equals \(2^0 + 2^1 + 2^2 + \dots + 2^x\); adding 1 therefore yields an exact power of two.
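A quick sanity check (a standalone copy of tableSizeFor, since the JDK's version is package-private; the class name is mine):

public class TableSizeForTest {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // verbatim copy of HashMap.tableSizeFor
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        int[] inputs = {1, 2, 3, 12, 16, 17, 1023, 1024};
        for (int cap : inputs)
            System.out.println(cap + " -> " + tableSizeFor(cap));
        // prints: 1 -> 1, 2 -> 2, 3 -> 4, 12 -> 16, 16 -> 16,
        //         17 -> 32, 1023 -> 1024, 1024 -> 1024
    }
}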
6. Analysis of HashMap's Key Techniques
Hashing and indexing
The heart of hash table design is reducing collisions, so the design of the hash function is critical. The JDK derives an element's index into table from the hash function (which mixes bits with XOR) followed by a bitwise AND. Let's first walk through the hash function in jdk1.8.0_231 and then briefly discuss why it is designed this way.
- static final int hash(Object key) source
//first derive the hash value
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
// fragment of putVal: (n - 1) & hash yields the element's index, which is equivalent to hash % n, where n is the table length (table.length)
if ((p = tab[i = (n - 1) & hash]) == null)
tab[i] = newNode(hash, key, value, null);
Why design it this way? Why not return key.hashCode() directly?
Consider a table whose length is small (n - 1 small) while the hash codes are large, sharing their low bits but differing widely in the high bits.
Only the low bits of the hash actually take part in the AND with n - 1, so returning such hash codes unchanged invites collisions. XOR-ing the low 16 bits with the high 16 bits (>>> fills with zeros) folds both low-bit and high-bit information into the low 16 bits: hashes that used to share low bits but differ in high bits, which would have collided, are now told apart, reducing the chance of collision.
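A small demonstration of the effect (the class name, the helper name spread, and the two hash codes are mine; the hash codes are hand-picked to share their low 16 bits):

public class HashSpreadDemo {
    // copy of HashMap.hash applied to a raw hashCode
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;                         // table length
        int h1 = 0x00010005;                // low 16 bits: 0x0005
        int h2 = 0x0ABC0005;                // same low 16 bits, very different high bits

        // without spreading, both keys land in the same bucket:
        System.out.println((n - 1) & h1);   // 5
        System.out.println((n - 1) & h2);   // 5

        // with HashMap's spreading, the high bits break the tie:
        System.out.println((n - 1) & spread(h1)); // 4
        System.out.println((n - 1) & spread(h2)); // 9
    }
}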
- One can prove that when n is a power of two, \((n - 1)\,\&\,hash = hash\,\&\,(n - 1) = hash \bmod n\), where \(n \in [16, 1 \ll 30]\) and \(hash \in [0, \mathrm{Integer.MAX\_VALUE}]\).
If hash < 0 the identity does not hold! For negative hash, Java's % can return a negative result, while the AND never does: since n is at most 1<<30, the sign bit of n - 1 is always 0, so the sign bit of the AND result is guaranteed to be 0 as well.
Let \(n = 2^k\) and \(hash = x = \sum_{i=0}^{m-1} \mu_{i}2^i\), where \(\mu_{i} \in \{0,1\}\).
\(n - 1 = 2^k - 1 = \sum_{i=0}^{k-1} 2^i\)
\((n-1)\,\&\,x = \left(\sum_{i=0}^{k-1} 2^i\right) \,\&\, \left(\sum_{i=0}^{m-1}\mu_{i}2^i\right)\)  (1)
If \(k < m\), then \((1) = \sum_{i=0}^{k-1}\mu_{i}2^i\).
If \(m \le k\), then \((1) = \sum_{i=0}^{m-1}\mu_{i}2^i\).
\(x \bmod n = x - \lfloor x/n \rfloor \cdot n = \sum_{i=0}^{m-1} \mu_{i}2^i - \left\lfloor \frac{\sum_{i=0}^{m-1}\mu_{i}2^i}{2^k} \right\rfloor \cdot 2^k\)  (2)
If \(k < m\), i.e. \(x \ge n\), then \((2) = \sum_{i=0}^{m-1} \mu_{i}2^i - \sum_{i=k}^{m-1} \mu_{i} 2^i = \sum_{i=0}^{k-1} \mu_{i} 2^i\).
If \(m \le k\), i.e. \(x < n\), then \((2) = x = \sum_{i=0}^{m-1}\mu_{i}2^i\).
In both cases (1) and (2) agree, hence \((n - 1)\,\&\,hash = hash\,\&\,(n - 1) = hash \bmod n\) (bitwise AND is commutative, so the first equality is immediate).
- Brute-force verification by program
public class Test {
    public static void main(String[] args) {
        int a, b, n, x;
        for (n = 16; n <= (1 << 30); n <<= 1) {
            if (n < 0) break;              // n overflowed past 1 << 30
            x = 0;
            do {
                a = (n - 1) & x;
                b = x % n;
                if (a != b) {
                    System.out.println("Mismatch! x = " + x + " n = " + n);
                    System.out.println("a = " + a + " b = " + b);
                }
                x++;
                if (x < 0) break;          // x overflowed past Integer.MAX_VALUE
            } while (x <= Integer.MAX_VALUE);
        }
    }
}
- Result: no output at all! It does run slowly, though: the work is about 30 * Integer.MAX_VALUE, i.e. roughly 2.1 billion * 30 ≈ 63 billion iterations.
Data structure
As the structure diagram earlier shows, jdk1.8 partially optimized HashMap by turning the long chains of the old array + linked list design into red-black trees. Concretely, in the jdk1.8.0_231 source (as in other jdk8 builds), once a bin's list grows past 8 nodes (TREEIFY_THRESHOLD, and only once the table holds at least 64 buckets), the whole list is treeified into a red-black tree.
Now ponder this: when a red-black tree has lost many nodes and holds fewer than 8, does it actively degenerate back into a linked list, or does it stay a tree? With 5 nodes, is it a tree or a list? And once it has shrunk to two nodes (a tree and a list look equally linear at that point) and a new node arrives, does it re-form a tree, or, the bin now holding fewer than 8 nodes, does a plain linked list resolve the collision? [A fine interview question: didn't you say you had read the source? Heh.]
Collision handling
As HashMap's data structure shows, it resolves collisions by separate chaining: nodes with the same hash but different keys are stored in a linked list or a red-black tree. A toy version of separate chaining is sketched below.
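The sketch is a deliberately tiny chained map, assuming String keys and a fixed power-of-two table (the class ToyChainedMap and everything in it are mine, for illustration only); the real HashMap layers resizing, treeification, and the spread hash on top of this skeleton:

// One linked list per bucket; colliding keys are chained together.
public class ToyChainedMap<V> {
    private static final int CAP = 16;               // power of two, like HashMap

    private static final class Node<V> {
        final String key; V value; Node<V> next;
        Node(String key, V value, Node<V> next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    @SuppressWarnings("unchecked")
    private final Node<V>[] table = (Node<V>[]) new Node[CAP];

    private int index(String key) {
        return (CAP - 1) & key.hashCode();           // the same indexing trick as HashMap
    }

    public void put(String key, V value) {
        int i = index(key);
        for (Node<V> e = table[i]; e != null; e = e.next)
            if (e.key.equals(key)) { e.value = value; return; }  // overwrite existing key
        table[i] = new Node<>(key, value, table[i]); // chain the collision at the head
    }

    public V get(String key) {
        for (Node<V> e = table[index(key)]; e != null; e = e.next)
            if (e.key.equals(key)) return e.value;
        return null;
    }
}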
Resizing mechanism
The jdk1.8 resizing mechanism is once again full of tricks; the library designers' skill commands admiration. Nobody produces a design this clever, and code this elegant, while crunching 996 against a deadline. Before analyzing the jdk1.8.0_231 version it is worth looking at the pre-jdk1.8 source.
- The pre-jdk1.8 resize mechanism: the resize function
// Resizes the HashMap; newCapacity is the new table length
void resize(int newCapacity) {
Entry[] oldTable = table;
int oldCapacity = oldTable.length;
if (oldCapacity == MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return;
}
// Allocate a new table, add every element of the old table to it,
// then install the new table in place of the old one.
Entry[] newTable = new Entry[newCapacity];
transfer(newTable);
table = newTable;
threshold = (int)(newCapacity * loadFactor);
}
// Moves every element of the old HashMap into the new table
void transfer(Entry[] newTable) {
Entry[] src = table;
int newCapacity = newTable.length;
for (int j = 0; j < src.length; j++) {
Entry<K,V> e = src[j];
if (e != null) {
src[j] = null;
do {
Entry<K,V> next = e.next; //remember the next node
int i = indexFor(e.hash, newCapacity); //recompute element e's index i under the new capacity
// migrate by head insertion (the original list ends up reversed: its head becomes the tail and its tail becomes the head)
e.next = newTable[i];
newTable[i] = e;
//advance e to the next node of the original list
e = next;
} while (e != null);
}
}
}
To sum up, pre-jdk8 resizing doubles the capacity and migrates every element of the original table into the new one, recomputing each element's index in the new table from scratch; each original list ends up reversed in the new table. The design is blunt: once a resize is needed, the whole array + list migration is done in the most straightforward way, with no tricks and plenty of work. The reversal is easy to reproduce; see the sketch below.
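A standalone sketch of the head-insertion loop above (plain int payloads instead of Entry objects, all nodes assumed to land in the same new bucket):

// Minimal re-enactment of the pre-jdk8 transfer loop: head insertion reverses the chain.
public class HeadInsertDemo {
    static final class Node {
        final int v; Node next;
        Node(int v, Node next) { this.v = v; this.next = next; }
    }

    public static void main(String[] args) {
        // old chain: 1 -> 2 -> 3
        Node e = new Node(1, new Node(2, new Node(3, null)));
        Node newHead = null;                 // plays the role of newTable[i]
        while (e != null) {
            Node next = e.next;              // remember the next node
            e.next = newHead;                // head insertion
            newHead = e;
            e = next;                        // advance along the original chain
        }
        for (Node p = newHead; p != null; p = p.next)
            System.out.print(p.v + " ");     // prints: 3 2 1
    }
}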
- Analysis of the jdk8 resize() source
final Node<K,V>[] resize() {
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
//Case 1: the table has already been allocated (oldCap > 0)
if (oldCap > 0) {
//the old capacity has already reached the configured maximum, 1<<30
if (oldCap >= MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE; //pin threshold to Integer.MAX_VALUE and stop growing
return oldTab;
}
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && // new capacity = 2 * old capacity
oldCap >= DEFAULT_INITIAL_CAPACITY) //and the old capacity is at least the default 16
newThr = oldThr << 1; // double threshold
}
else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;
//capacity and threshold are both 0: initialize the capacity to 16 and the resize threshold to 16 * 3 / 4 = 12
else { // zero initial threshold signifies using defaults
newCap = DEFAULT_INITIAL_CAPACITY; //16
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
if (newThr == 0) {
float ft = (float)newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
(int)ft : Integer.MAX_VALUE);
}
threshold = newThr;
// migrate the original table into the new table
@SuppressWarnings({"rawtypes","unchecked"})
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
table = newTab;
if (oldTab != null) {
for (int j = 0; j < oldCap; ++j) {
Node<K,V> e;
if ((e = oldTab[j]) != null) {
oldTab[j] = null;
if (e.next == null)
newTab[e.hash & (newCap - 1)] = e; //a lone node: its index in the new table is either its old index or its old index + oldCap
else if (e instanceof TreeNode) //e is a TreeNode: migration of the tree is delegated to split
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
else { // preserve order: a list keeps its order after migration (before jdk1.8 it was reversed)
Node<K,V> loHead = null, loTail = null;
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
next = e.next;
if ((e.hash & oldCap) == 0) { //e's index in the new table equals its old index; a qualitative analysis is given below
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
else { //e's index in the new table = old index + oldCap
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
An explanation of the new-index computation:
- Why must the new index equal either the old index, or the old index + oldCap?
All of this precision engineering flows from HashMap's capacity always being a power of two: in binary there is exactly one 1 bit, with every other bit 0, and capacity - 1 turns that high 1 into 0 and every lower bit into 1. This is exactly what creates the numeric relationship between the old and new indices. First a concrete example of AND-ing a hash with the old and new capacity - 1; an informal proof follows.
Example: let e1.hash = 8 (00001000), e2.hash = 17 (00010001), oldCap = 16 (00010000), newCap = 32 (00100000).
Then oldCap - 1 = (00001111); newCap - 1 = (00011111);
e1.hash & (oldCap - 1) = (00001000) & (00001111) = (00001000) = 8;
e2.hash & (oldCap - 1) = (00010001) & (00001111) = (00000001) = 1;
e1.hash & (newCap - 1) = (00001000) = 8 = old index;
e2.hash & (newCap - 1) = (00010001) = 17 = old index + oldCap = 1 + 16 = 17;
Did you notice that whether e.hash & oldCap is 0 already decides between new index = old index and new index = old index + oldCap?
e1.hash & oldCap = (00001000) & (00010000) = (00000000) = 0, so the new index = old index = 8;
e2.hash & oldCap = (00010001) & (00010000) = (00010000) != 0, so the new index = old index + oldCap = 1 + 16 = 17;
The reason: oldCap's 1 bit sits in the same position as the highest 1 bit of newCap - 1; below that position, newCap - 1 is all 1s while oldCap is all 0s. If e.hash & oldCap == 0, then e.hash & (newCap - 1) has a 0 at that highest bit position, and since oldCap - 1 and newCap - 1 are identical in every lower bit (all 1s), it follows that when e.hash & oldCap == 0, new index = old index.
If e.hash & oldCap != 0, then e.hash & (newCap - 1) has a 1 at that highest bit position, and by the same agreement of the lower bits, new index = old index + oldCap. This qualitative argument asks you to run the bit operations in your head; otherwise it can be bewildering.
Here is a short proof. Assume:
e.hash & (oldCap - 1) = x, newCap = oldCap << 1
Claim: e.hash & (newCap - 1) = x or (x + oldCap)
Proof:
Let \(e.hash = \sum_{i=0}^{m-1}\mu_{i}2^i\), where \(\mu_{i} \in \{0,1\}\).
\(oldCap = 2^k \Rightarrow oldCap - 1 = 2^k - 1 = \sum_{i=0}^{k-1} 2^i\)
\(newCap = 2^{k+1} \Rightarrow newCap - 1 = 2^{k+1} - 1 = \sum_{i=0}^{k} 2^i\)
e.hash & (oldCap - 1) = \(\left(\sum_{i=0}^{m-1}\mu_{i}2^i\right) \,\&\, \left(\sum_{i=0}^{k-1} 2^i\right) = x\)
e.hash & (newCap - 1) = \(\left(\sum_{i=0}^{m-1}\mu_{i}2^i\right) \,\&\, \left(\sum_{i=0}^{k} 2^i\right) = \left(\sum_{i=0}^{m-1}\mu_{i}2^i\right) \,\&\, \left(\sum_{i=0}^{k-1} 2^i + 2^k\right)\)  (2)
If \(m \le k\), then \((2) = \left(\sum_{i=0}^{m-1}\mu_{i}2^i\right) \,\&\, \left(\sum_{i=0}^{k-1} 2^i\right) = x\).
If \(m > k\) and \(\mu_{k} = 1\), then \((2) = \left(\sum_{i=0}^{m-1}\mu_{i}2^i\right) \,\&\, \left(\sum_{i=0}^{k-1} 2^i\right) + 2^k = x + oldCap\);
if \(m > k\) and \(\mu_{k} = 0\), then \((2) = \left(\sum_{i=0}^{m-1}\mu_{i}2^i\right) \,\&\, \left(\sum_{i=0}^{k-1} 2^i\right) = x\).
This completes the proof.
From the proof above we can draw the following conclusions:
when \(m \le k\), necessarily \(\mu_{k} = 0\), and the new index = old index;
when \(m > k\) and \(\mu_{k} = 0\), the new index = old index;
when \(m > k\) and \(\mu_{k} = 1\), the new index = old index + oldCap.
So the code only needs to check the deciding condition, namely whether \(\mu_{k} = 0\), which in code is exactly the test e.hash & oldCap == 0.
One simple line of code, and such a beautiful one.
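In the same brute-force spirit as the earlier equivalence check, here is a quick program verifying the rule for one capacity (the class name is mine; any power-of-two oldCap behaves the same):

public class SplitRuleTest {
    public static void main(String[] args) {
        int oldCap = 16, newCap = oldCap << 1;
        for (int hash = 0; hash < 1_000_000; hash++) {
            int oldIdx = hash & (oldCap - 1);
            int newIdx = hash & (newCap - 1);
            // the rule from the proof: bit mu_k of the hash decides lo vs hi
            int expected = ((hash & oldCap) == 0) ? oldIdx : oldIdx + oldCap;
            if (newIdx != expected)
                System.out.println("Mismatch at hash = " + hash);
        }
        System.out.println("done"); // prints only "done": the rule holds
    }
}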
7. A HashMap Example
package collectionlearn;
import java.util.*;
public class HashMapTest {
public static void main(String[] args) {
HashMap<Integer,String> hashMap = new HashMap<>();
hashMap.put(1,"hello");
hashMap.put(2,"world");
hashMap.put(3,"changgui");
System.out.println(hashMap.put(4,"seu"));
hashMap.put(3,"ahpu");
System.out.println(hashMap.get(2));
Set<Integer> keys = hashMap.keySet();
System.out.println(keys);
Collection<String> values = hashMap.values();
System.out.println(values);
Set<Map.Entry<Integer,String>> entrySet = hashMap.entrySet();
for(Map.Entry<Integer,String> kv: entrySet){
System.out.println(kv.getKey() + " " + kv.getValue());
}
}
}
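For reference, this program prints the following. (HashMap iteration order is unspecified in general, but small consecutive Integer keys hash to buckets 1..4 of the default 16-slot table, so the order below is what you will actually see.)
null
world
[1, 2, 3, 4]
[hello, world, ahpu, seu]
1 hello
2 world
3 ahpu
4 seu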
8. Interview Questions
- What can you tell us about HashMap?
(1) Before jdk1.8, HashMap's data structure was array + linked list; in jdk1.8 it is array + linked list + red-black tree. Concretely, when a bin's list grows past 8 nodes it may be converted to a red-black tree: before treeifying, the table capacity is checked, and if it is below 64 the map resizes first; only if that bin still collides after growing is it actually treeified;
(2) HashMap's resizing, briefly: it has three resize-related fields, capacity, threshold, and loadFactor; when size > threshold = capacity * loadFactor, the capacity is doubled;
(3) HashMap is not thread-safe: use a thread-safe concurrent class instead, lock externally, or use Collections.synchronizedMap(new HashMap(...));
- What are the standard ways to resolve hash collisions? (A sketch of open addressing follows this list.)
Open addressing: on a collision, probe for the next empty slot; as long as the table is large enough, an empty slot can always be found to store the record.
Separate chaining: every table cell acts as the head of a linked list, and all elements with hash address i form one chain; on a collision the key is appended to the list rooted at that cell. This is HashMap's approach.
Rehashing: on a collision, compute a new address with another hash function, repeating until the collision disappears.
Public overflow area: split the structure into a primary table and an overflow table, and place all colliding elements into the overflow table.
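For contrast with HashMap's separate chaining, a minimal open-addressing (linear probing) sketch; the class ToyLinearProbingMap is mine, and deletion is omitted to keep it short:

// On a collision, scan forward (wrapping around) until a free slot is found.
public class ToyLinearProbingMap<V> {
    private static final int CAP = 16;               // fixed power-of-two table
    private final String[] keys = new String[CAP];
    private final Object[] vals = new Object[CAP];

    public void put(String key, V value) {
        int i = (CAP - 1) & key.hashCode();
        while (keys[i] != null && !keys[i].equals(key))
            i = (i + 1) & (CAP - 1);                 // probe the next slot
        keys[i] = key;                               // assumes the table never fills up
        vals[i] = value;
    }

    @SuppressWarnings("unchecked")
    public V get(String key) {
        int i = (CAP - 1) & key.hashCode();
        while (keys[i] != null) {
            if (keys[i].equals(key)) return (V) vals[i];
            i = (i + 1) & (CAP - 1);
        }
        return null;
    }
}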