Redis System Design | Distributed Cache System Design

This video explains how to design a distributed cache system like Redis/Memcached.
This is one of the famous Amazon interview questions.

How do you distribute keys across nodes?
Answer: use consistent hashing.
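A minimal sketch of a consistent hash ring in Python. The node names, the virtual-node count, and the use of MD5 are illustrative choices, not anything specified in the video; the point is that adding or removing a node only remaps roughly 1/N of the keys.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps keys to cache nodes; a node change only remaps ~1/N of keys."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes  # virtual nodes per physical node smooth the distribution
        self.ring = []        # sorted list of (hash, node) points on the ring
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    def remove_node(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def get_node(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]
```

Removing a node only affects keys that were mapped to it; every other key keeps its owner, which is exactly why consistent hashing beats `hash(key) % N` for cache clusters.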

Apart from LRU, you can use a Count-min sketch.
To estimate how frequently each key is accessed, use a Count-min sketch.
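A minimal Count-min sketch sketch in Python, assuming illustrative width/depth parameters and MD5-derived row hashes. It tracks approximate key frequencies in fixed memory; estimates can only overcount (hash collisions inflate counters), never undercount.

```python
import hashlib

class CountMinSketch:
    """Fixed-memory frequency estimator; estimates are never below the true count."""

    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, key):
        # One independent-ish hash per row, derived by salting the key with the row id.
        for row in range(self.depth):
            digest = hashlib.md5(f"{row}:{key}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, key):
        for row, col in self._indexes(key):
            self.table[row][col] += 1

    def estimate(self, key):
        # True count <= min across rows, since collisions only add to counters.
        return min(self.table[row][col] for row, col in self._indexes(key))
```

An eviction policy like LFU can consult `estimate()` to find cold keys without storing an exact counter per key.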

----------------------------------------------------------------------------------------------------------------------------------------------------------
LRU Code by Varun Vats:
Comments

If you're aware of how caching works and LRU, and came here curious about how distributed caches are designed, start at 23:42. And very well put content, as always :)

shreyasahu

You are a Hero, man! These are all the topics that we actually need. And sadly find no decent tutorial videos online! Thanks.

NitishSarin

Given the time you were on the spot, my expectation was higher! I was expecting HA & DR as well! As usual, you are awesome.

priyakantpatel

The best thing I like in this course is your style of wearing the cap in the presentation.

hasifkhan

Thank you so much for going deep into the thought process and guiding it while designing a complete system around any concept/problem. Thank you. Your detailed content will be a saviour for me in lead interviews. Thank you.

pranaypatadiya

Thanks for the explanation, helped a lot!

talivanov

Thanks for this! Great content across your channel.. I feel stronger on system design :)

thmoeboa

Who else noticed 'cashing' on the whiteboard at the beginning?

Great content bro

_sudipidus_

Nice explanation. Thanks for the awesome videos. :)

ankitapriya

Again, very nice video. Thanks sir :)... waiting for more videos like this....

springtest

Thanks a lot +Narendra. This is an invaluable video.

nishadkumar

Wow!! One video and you cleared thousands of doubts !!

manaligadre

It misses a very important part: how does LRU eviction work? In your design we can add as many items to the cache as we want, leading to out-of-memory. I think you should have mentioned that LRUs are limited either by the number of elements or by their total size. If adding a new element would exceed the limit, we need to evict one element(*): we pick it from the front of the linked list and, based on its key, remove the entry from the hash table and delete the node in the linked list. (*) If we limit the LRU by memory size, we might keep removing elements until memory goes back below the threshold.
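The eviction flow the comment describes can be sketched as a capacity-bounded LRU, a hash map for O(1) key-to-node lookup plus a doubly linked list that keeps recency order. This is a generic illustration, not the code referenced in the description.

```python
class Node:
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    """Hash map: key -> linked-list node; list order: head = most recent."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}
        self.head, self.tail = Node(), Node()   # sentinel nodes simplify edge cases
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._unlink(node)
        self._push_front(node)   # touching a key makes it most recently used
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail.prev  # least recently used sits next to the tail
            self._unlink(lru)
            del self.map[lru.key]  # evict from both structures
        node = Node(key, value)
        self.map[key] = node
        self._push_front(node)
```

Storing the node itself (not just the value) in the hash map is what makes both the lookup and the recency update O(1), since the node can be unlinked without traversing the list.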

michahubert

ARC is probably the best cache eviction algorithm, as it takes care of both recency and number of hits. It's generally not seen in open source because IBM holds a patent on the algorithm.

bhatanand

Thanks bro!! It was an awesome video. I was really searching for a good caching video!! It was explanatory!!

piyushyadav

Awesome! It would be a great help if you could talk more about LFU. Thanks in advance :)

mayankrathore

Thanks for the video tutorials, that's first :), but I still have some questions/comments. Maybe I am wrong, but please correct me if so: 1. It's not clear how your design was impacted by the estimation you made at first, and how it would change at 5 requests/sec TPS. 2. How does a value in the hash table give a constant-time lookup into the linked list? 3. Apart from the last part, all the other areas also apply to a monolithic system, so it seems the distributed philosophy needs to be addressed more.

achatterjee

Very nicely explained. Could you come up with a video on 'Designing Distributed Scheduler' ?

abhishek

Liked your content. The content is partly inspired (I don't mean copy-paste here) by the Cassandra architecture. In other words, we can always draw parallels (conceptually) with these distributed architecture frameworks. Thanks for raising my awareness.

contactnarita

Good stuff, thank you.
One more thing we can do to improve resiliency when downstream services are unavailable is to use two TTLs (time-to-live values): a SOFT TTL and a HARD TTL.
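The two-TTL idea from the comment can be sketched as a read-through helper: refresh when the soft TTL expires, but fall back to the stale value until the hard TTL passes if the downstream fetch fails. The function name, entry layout, and TTL values here are hypothetical, purely to illustrate the pattern.

```python
import time

def get_with_ttls(cache, key, fetch, soft_ttl=60, hard_ttl=3600):
    """Read-through lookup with a SOFT TTL (freshness) and a HARD TTL (staleness limit)."""
    entry = cache.get(key)
    now = time.time()
    if entry and now < entry["soft_expiry"]:
        return entry["value"]                  # still fresh: no downstream call
    try:
        value = fetch(key)                     # soft TTL expired: try to refresh
    except Exception:
        if entry and now < entry["hard_expiry"]:
            return entry["value"]              # downstream is down: serve stale
        raise                                  # too stale to serve safely
    cache[key] = {
        "value": value,
        "soft_expiry": now + soft_ttl,
        "hard_expiry": now + hard_ttl,
    }
    return value
```

The soft TTL controls how often you hit the backing service; the hard TTL bounds how stale a value you are ever willing to serve during an outage.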

Eggeater