Implement LRU Cache | SWE Interview #19

Implementing an LRU cache is a classic Leetcode problem. You may only have to implement a cache from scratch (i.e. without Redis or Memcached) a few times in your SWE career, but interviewers love to ask candidates to implement an LRU/LFU cache because it tests many concepts at once. A cache is also an immensely useful, common architectural component in production applications, particularly distributed systems, where it reduces the time taken to access particular resources or data.
This interview is no different. It is the second in a series of interviews that day, and the interviewer opens by asking the candidate if they are familiar with a “least recently used” (LRU) cache. The candidate incrementally tests their code and writes comments detailing their design and the time complexity of each method. The main advantage of a cache is constant-time (in terms of asymptotic complexity) access to the items stored in it. Getting a value only becomes expensive on a cache miss, which occurs when the item was never put into the cache, when it expired under the cache's invalidation strategy, or when it was evicted because the cache was full.
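For reference, here is a minimal sketch of the classic barebones approach (not the exact code written in the video): a plain dict for O(1) lookup plus a doubly linked list for O(1) recency updates and eviction.

# Minimal LRU cache sketch: dict gives O(1) lookup, the doubly
# linked list gives O(1) recency updates and eviction.
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.nodes = {}                      # key -> Node
        self.head = Node(None, None)         # sentinel: most recent end
        self.tail = Node(None, None)         # sentinel: least recent end
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.nodes:            # cache miss
            return -1
        node = self.nodes[key]
        self._unlink(node)                   # mark as most recently used
        self._push_front(node)
        return node.value

    def put(self, key, value):
        if key in self.nodes:
            self._unlink(self.nodes[key])
        node = Node(key, value)
        self.nodes[key] = node
        self._push_front(node)
        if len(self.nodes) > self.capacity:  # evict least recently used
            lru = self.tail.prev
            self._unlink(lru)
            del self.nodes[lru.key]

Returning -1 for a missing key follows the usual Leetcode convention for this problem.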
This interview steps through each fundamental concept needed to understand a cache and build one. The candidate suggests using an ordered dictionary to simplify the implementation, but the interviewer asks for a more barebones version that does not rely on the more powerful data structures in the Python standard library, such as OrderedDict.
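In the spirit of the candidate's suggestion, a rough sketch of the OrderedDict version (again, not the exact code from the video) looks like this, with move_to_end and popitem(last=False) handling recency and eviction:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:             # cache miss
            return -1
        self.data.move_to_end(key)           # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict least recently used

Internally, CPython's OrderedDict keeps insertion order with the same hashmap-plus-linked-list idea, which is presumably why the interviewer treats it as hiding the interesting work.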
The interview ends with the interviewer discussing their day-to-day work at a FinTech firm. Despite the solid progress made on the cache, this candidate did not move on to an onsite interview with the company.
Chapters:
0:00 Introduction
0:33 Problem Statement
1:16 Coding - LRU Cache
10:00 Coding - OrderedDict
20:17 Company Questions
If you like the video, please like and subscribe to the channel! If you want to see an interview exploring a particular topic or question, please comment below!
#leetcode #hashmap #cache #dictionary #python #dsa #lru