460. LFU Cache
Design and implement a data structure for a Least Frequently Used (LFU) cache. It should support the following operations: get and put.

get(key) - Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
put(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least frequently used item before inserting a new item. For the purpose of this problem, when there is a tie (i.e., two or more keys that have the same frequency), the least recently used key would be evicted.
Follow up:
Could you do both operations in O(1) time complexity?
Example:
LFUCache cache = new LFUCache( 2 /* capacity */ );
cache.put(1, 1);
cache.put(2, 2);
cache.get(1);       // returns 1
cache.put(3, 3);    // evicts key 2
cache.get(2);       // returns -1 (not found)
cache.get(3);       // returns 3
cache.put(4, 4);    // evicts key 1
cache.get(1);       // returns -1 (not found)
cache.get(3);       // returns 3
cache.get(4);       // returns 4
Two HashMaps are used: one stores the <key, value> pairs, the other stores <key, node>, i.e. which frequency-list node each key currently belongs to.
A doubly linked list keeps the frequency of each key. Each list node represents one reference count, and all keys with that count are stored in Java's built-in LinkedHashSet, which preserves insertion order and therefore the recency order needed to break ties.
Every time a key is referenced, first find the node the key currently belongs to and remove the key from that node's set. If the following node exists and its frequency is larger by exactly one, add the key to the following node's set; otherwise create a new node and insert it right after the current node. A node whose set becomes empty is unlinked from the list (see the sketch below).
All operations are guaranteed to be O(1).
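A minimal sketch of the node layout this description refers to; the class and field names here are mine, not from the original post:

import java.util.LinkedHashSet;

// One node of the doubly linked frequency list: all keys that have been
// referenced `count` times live in this node's LinkedHashSet, which keeps
// insertion order, so the least recently used key in the bucket comes first.
class FreqNode {
    int count;                                            // shared reference count
    LinkedHashSet<Integer> keys = new LinkedHashSet<>();  // keys with this count, in recency order
    FreqNode prev, next;                                  // neighboring frequency buckets

    FreqNode(int count) {
        this.count = count;
    }
}

Because HashMap lookups and LinkedHashSet add/remove are O(1), moving a key between adjacent buckets costs O(1) as well.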
BB: Top-k elements (ascending order): first used a PriorityQueue, but then every access needs a traversal; later switched to an LRU-style idea, keeping the entries sorted at insertion time.
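To illustrate the "access needs a traversal" point: with java.util.PriorityQueue, re-ordering an entry after its frequency changes requires remove(Object), which scans the heap linearly. The sketch below is hypothetical (the class and method names are mine), not part of the accepted solution:

import java.util.PriorityQueue;

// Entries ordered by (frequency, last use); least frequently / least recently used on top.
class HeapEntry {
    int key, value, freq;
    long lastUsed;
    HeapEntry(int key, int value) { this.key = key; this.value = value; this.freq = 1; }
}

class LfuWithHeap {
    long tick = 0;
    PriorityQueue<HeapEntry> pq = new PriorityQueue<>((a, b) ->
            a.freq != b.freq ? Integer.compare(a.freq, b.freq)
                             : Long.compare(a.lastUsed, b.lastUsed));

    // Bumping an entry's frequency is the bottleneck: remove(Object) is O(n).
    void touch(HeapEntry e) {
        pq.remove(e);        // linear scan of the heap
        e.freq++;
        e.lastUsed = tick++;
        pq.offer(e);         // O(log n)
    }
}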
import java.util.HashMap;
import java.util.LinkedHashSet;

class LFUCache {
    HashMap<Integer, Node> map = new HashMap<>();                 // key -> node
    HashMap<Integer, LinkedHashSet<Node>> fre = new HashMap<>();  // frequency -> nodes with that frequency, in LRU order
    int cap;
    int size;
    int min = 0;   // smallest frequency currently present in the cache

    public LFUCache(int capacity) {
        this.cap = capacity;
    }

    public int get(int key) {
        if (!map.containsKey(key)) {
            return -1;
        }
        Node cur = map.remove(key);
        int fr = cur.f;
        fre.get(fr).remove(cur);
        // If this was the last node at the minimum frequency, the minimum moves up by one.
        if (fre.containsKey(fr) && fre.get(fr).isEmpty() && min == fr) {
            min++;
        }
        cur.f++;
        if (!fre.containsKey(cur.f)) {
            fre.put(cur.f, new LinkedHashSet<Node>());
        }
        map.put(key, cur);
        fre.get(cur.f).add(cur);
        return cur.v;
    }

    public void put(int key, int value) {
        if (cap < 1) {
            return;
        }
        if (map.containsKey(key)) {
            // Update the value, then reuse get() to bump the frequency.
            map.get(key).v = value;
            get(key);
        } else {
            if (size >= cap) {
                // Evict the least recently used node among those with the minimum frequency.
                Node discard = fre.get(min).iterator().next();
                map.remove(discard.k);
                fre.get(min).remove(discard);
                size--;
            }
            Node cur = new Node(key, value, 1);
            map.put(key, cur);
            if (!fre.containsKey(1)) {
                fre.put(1, new LinkedHashSet<>());
            }
            fre.get(1).add(cur);
            min = 1;   // a brand-new key always has frequency 1
            size++;
        }
    }
}

class Node {
    int k;
    int v;
    int f;   // reference count (frequency)

    Node(int k, int v, int f) {
        this.k = k;
        this.v = v;
        this.f = f;
    }
}
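To sanity-check the class against the example above, a small driver (the Main class is mine, added for illustration) can replay the sequence:

public class Main {
    public static void main(String[] args) {
        LFUCache cache = new LFUCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1)); // 1
        cache.put(3, 3);                  // key 2 has the lowest frequency, so it is evicted
        System.out.println(cache.get(2)); // -1
        System.out.println(cache.get(3)); // 3
        cache.put(4, 4);                  // keys 1 and 3 both have frequency 2; key 1 is the least recently used, so it is evicted
        System.out.println(cache.get(1)); // -1
        System.out.println(cache.get(3)); // 3
        System.out.println(cache.get(4)); // 4
    }
}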