LRU Cache

Design and implement a data structure for a Least Recently Used (LRU) cache. It should support the following operations: get and set.

get(key) – Get the value (which will always be positive) of the key if the key exists in the cache; otherwise return -1.
set(key, value) – Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting the new item.

Solution: Use a doubly linked list with dummy head and tail nodes, plus a HashMap mapping each key to its node in the list.

1. On get(key), first check whether the key is in the map. If it is, remove the corresponding node from its current position, move it to the tail of the list (right before the dummy tail node), and return its value. If it does not exist, return -1.

2. On set(key, value), first call get(key) to see whether the key already exists; if it does, just update its value. Otherwise, create a new node; if the cache is full, remove the least recently used element (the node right after the dummy head) before adding the new node to the tail.

Both get() and set() run in O(1) time.

import java.util.HashMap;
import java.util.Map;

public class Solution {
    public class Node {
        int key;
        int value;
        Node prev;
        Node next;
        public Node(int key, int value) {
            this.key = key;
            this.value = value;
            this.prev = null;
            this.next = null;
        }
    }

    Map<Integer, Node> cache;
    int capacity;
    Node head;
    Node tail;

    public Solution(int capacity) {
        cache = new HashMap<>();
        this.capacity = capacity;
        head = new Node(0, 0);
        tail = new Node(0, 0);
        head.next = tail;
        tail.prev = head;
    }

    public int get(int key) {
        if (cache.containsKey(key)) {
            //remove it from its original place
            //put to the tail the queue
            Node node = cache.get(key);
            removeNode(node);
            addToTail(node);
            return node.value;
        }
        return -1;
    }

    public void set(int key, int value) {
        // get() also moves an existing node to the tail, refreshing its recency
        if (get(key) != -1) {
            cache.get(key).value = value;
            return;
        }
        Node newNode = new Node(key, value);
        if (cache.size() < capacity) {
            addToTail(newNode);
        } else {
            // cache is full: evict the least recently used node (right after the dummy head)
            cache.remove(head.next.key);
            removeNode(head.next);
            addToTail(newNode);
        }
        cache.put(newNode.key, newNode);
    }

    public void addToTail(Node node) {
        // link the node in right before the dummy tail (most recently used position)
        node.prev = tail.prev;
        node.prev.next = node;
        node.next = tail;
        tail.prev = node;
    }

    public void removeNode(Node node) {
        if (node.prev != null) {
            node.prev.next = node.next;
        }
        if (node.next != null) {
            node.next.prev = node.prev;
        }
        node.next = null;
        node.prev = null;
    }
}
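
A quick usage sketch (the LRUCacheDemo class name and the capacity of 2 are arbitrary choices for illustration; it assumes the Solution class above):

public class LRUCacheDemo {
    public static void main(String[] args) {
        Solution cache = new Solution(2);      // capacity of 2, chosen arbitrarily
        cache.set(1, 10);
        cache.set(2, 20);
        System.out.println(cache.get(1));      // 10; key 1 becomes most recently used
        cache.set(3, 30);                      // evicts key 2, the least recently used
        System.out.println(cache.get(2));      // -1; key 2 was evicted
        System.out.println(cache.get(3));      // 30
    }
}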

 


Min Stack

Implement a stack with a min() function, which returns the smallest number currently in the stack.

It should support push, pop, and min operations, all at O(1) cost.

Example

Operations: push(1), pop(), push(2), push(3), min(), push(1), min()
Return: 1, 2, 1

Note

The min operation will never be called when there is no number in the stack.

Solution: Maintain a second stack, minStack, that records the minimum value at every state. When a newly pushed number is smaller than the current top of minStack, it is the new minimum, so push it onto minStack as well. When popping, check whether the popped number equals the top of minStack; if it does, that number is the current minimum, so minStack.pop() must accompany stack.pop(). Watch out for duplicate numbers: when pushing a number equal to the current minimum, it must also be pushed onto minStack.

The time complexity here is amortized (average), since the underlying Java Stack occasionally resizes as it grows.

import java.util.Stack;

public class Solution {
    Stack<Integer> minStack;
    Stack<Integer> stack;
    public Solution() {
        stack = new Stack<>();
        minStack = new Stack<>();
    }

    public void push(int number) {
        stack.push(number);
        // record a new minimum; >= keeps duplicates of the current minimum
        if (minStack.isEmpty() || minStack.peek() >= number) {
            minStack.push(number);
        }
    }

    public int pop() {
        // if the popped number is the current minimum, pop minStack as well
        if (stack.peek().equals(minStack.peek())) {
            minStack.pop();
        }
        return stack.pop();
    }

    public int min() {
        return minStack.peek();
    }
}
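
A small driver that replays the example sequence from the problem (the MinStackDemo class name is made up for illustration; it assumes the Solution class above):

public class MinStackDemo {
    public static void main(String[] args) {
        Solution s = new Solution();
        s.push(1);
        System.out.println(s.pop());   // 1
        s.push(2);
        s.push(3);
        System.out.println(s.min());   // 2
        s.push(1);
        System.out.println(s.min());   // 1  -> prints 1, 2, 1 in total
    }
}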