LRU Cache Implementation
LRU Cache
Description:
Design and implement a data structure for a Least Recently Used (LRU) cache. It should support the following operations: get and set.
get(key): Get the value (which will always be positive) of the key if the key exists in the cache; otherwise return -1.
set(key, value): Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting the new item.
Analysis:
To make lookup, insertion, and deletion all fast, combine a doubly linked list (std::list) with a hash table (std::unordered_map):
1. The hash table stores, for each key, the position (iterator) of its node, so a node can be located in expected O(1) time.
2. A doubly linked list supports O(1) insertion and deletion at a known position; with a singly linked list you would first have to find the predecessor node.
Implementation details:
1. The closer a node is to the head of the list, the more recently it was accessed; the node at the tail is the least recently used.
2. On a hit, move the accessed node to the head of the list with std::list::splice, which relinks the node in O(1) and keeps existing iterators valid (see the sketch after this list), and update the node's iterator in the hash table.
3. On insertion, if the cache has reached its capacity, remove the tail node and erase the corresponding entry from the hash table, then push the new node to the head of the list.
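Step 2 hinges on std::list::splice; the following minimal standalone sketch (the names lst and it are illustrative, not from the original write-up) shows that the relink is O(1) and leaves the existing iterator valid, which is why the map only needs its stored iterator refreshed, never rebuilt.

#include <cassert>
#include <iterator>
#include <list>

int main() {
    std::list<int> lst = {10, 20, 30};
    auto it = std::next(lst.begin(), 2);  // iterator to the node holding 30
    lst.splice(lst.begin(), lst, it);     // relink that node to the front; no element is copied
    assert(lst.front() == 30);            // the node is now at the head
    assert(*it == 30);                    // the old iterator still refers to the same node
    return 0;
}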
Code:
#include <list>
#include <unordered_map>

class LRUCache {
private:
    struct CacheNode {
        int key;
        int value;
        CacheNode(int k, int v) : key(k), value(v) {}
    };

public:
    LRUCache(int capacity) {
        this->capacity = capacity;
    }

    int get(int key) {
        if (cacheMap.find(key) == cacheMap.end()) {
            return -1;
        }
        // Move the accessed node to the front of the list and refresh its iterator in the map.
        cacheList.splice(cacheList.begin(), cacheList, cacheMap[key]);
        cacheMap[key] = cacheList.begin();
        return cacheMap[key]->value;
    }

    void set(int key, int value) {
        if (cacheMap.find(key) == cacheMap.end()) {
            if (static_cast<int>(cacheList.size()) >= capacity) {
                // Evict the tail node (the least recently used) and its map entry.
                cacheMap.erase(cacheList.back().key);
                cacheList.pop_back();
            }
            // Insert the new node at the front of the list and record it in the map.
            cacheList.push_front(CacheNode(key, value));
            cacheMap[key] = cacheList.begin();
        } else {
            // Update the value, move the node to the front, and refresh its iterator in the map.
            cacheMap[key]->value = value;
            cacheList.splice(cacheList.begin(), cacheList, cacheMap[key]);
            cacheMap[key] = cacheList.begin();
        }
    }

private:
    int capacity;
    std::list<CacheNode> cacheList;
    std::unordered_map<int, std::list<CacheNode>::iterator> cacheMap;
};
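Usage example (a minimal sketch, not part of the original solution): a driver that, assuming it is compiled together with the class above, exercises the eviction rule with a capacity of 2.

#include <iostream>

int main() {
    LRUCache cache(2);                  // capacity of 2
    cache.set(1, 1);
    cache.set(2, 2);
    std::cout << cache.get(1) << "\n";  // prints 1; key 1 becomes the most recently used
    cache.set(3, 3);                    // cache is full, so key 2 (least recently used) is evicted
    std::cout << cache.get(2) << "\n";  // prints -1; key 2 is gone
    std::cout << cache.get(3) << "\n";  // prints 3
    return 0;
}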