A Video About Queues

Thank you Sam for another phenomenal post. This one was so so so good.

S/O Ph4se0n3 for the awesome edit 🙏

Comments

davidsiewert:
We don't have to care about dropping requests, because our app with only <10 users will never hit these limits.

abhisheknavgan:
this was the first time I had fun reading and understanding a blog post.

danielgilleland:
"... unless it was Jira and it was my job..." 🤣

bone_broth_:
The amount of care and polish Sam puts into his posts cannot be overstated. Really solid stuff!

rashim:
Main thing I am taking away from this video...

Sam is a FKKING LEGEND!!!

mduvigneaud:
I wrote a queuing system some years ago that allowed a mix of all of those strategies plus some other features. It was for data processing, though. I was converting a large, monolithic batch process (a couple of hours to run) to be parallel and closer to realtime (sub-second).

roganl:
As you mentioned in passing, it would seem trivial to add an initial processing branch that filters "expired/timed-out" requests, providing an early exit and rapid dequeuing of these abandoned requests. Similarly, why isn't there a comparison-and-drop on enqueue that prioritizes the oldest entries, which at worst would restart the TTL for its queue slot? Yet another thought: dequeue capacity is the metric to gauge - how long does a request take to process? Once that time, multiplied by the queue depth, has elapsed, the system has gone pear-shaped and additional capacity should be allocated.

I am reminded of all the priority queuing knobs at the network layer (including WRED and RED) that exist but are largely unused on the "public" net, and used only in limited fashion on corporate networks to accommodate RTP media traffic. It was telling that in the US and parts of Europe, ISPs strongly preferred simply adding capacity rather than tinkering with queuing. This was in large part because non-FIFO queues were hard to reason about, and turning the knobs only makes a difference in the edge cases - and when I say edge cases, think about the last person being shoved onto the subway by a train attendant in Tokyo. If you wanted a better "experience", then adding capacity, i.e. the ability to handle/dequeue requests, is the only game in town.

One more thought: Fry's Electronics utilized a single queue for its dozens of checkout counters - a shared, scalable processing queue. This was effectively human-scale load balancing.

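A minimal TypeScript sketch of the early-exit idea described above, assuming each request records when it entered the queue and how long its client is willing to wait; the names and the capacity heuristic shape are illustrative, not taken from the post:

interface QueuedRequest {
  id: number;
  enqueuedAt: number; // ms timestamp when the request entered the queue
  timeoutMs: number;  // how long the client will wait before giving up
}

class ExpiringFifoQueue {
  private items: QueuedRequest[] = [];

  enqueue(req: QueuedRequest): void {
    this.items.push(req);
  }

  // Early exit: skip requests whose client has already timed out, so no
  // processing capacity is spent on work nobody is waiting for anymore.
  dequeue(now: number = Date.now()): QueuedRequest | undefined {
    while (this.items.length > 0) {
      const head = this.items.shift()!;
      if (now - head.enqueuedAt <= head.timeoutMs) {
        return head;
      }
      // Expired: drop it and keep looking.
    }
    return undefined;
  }

  // The capacity gauge from the comment: if average processing time
  // multiplied by queue depth exceeds what callers will tolerate, the
  // system has gone pear-shaped and needs more workers.
  needsMoreCapacity(avgProcessingMs: number, toleratedWaitMs: number): boolean {
    return avgProcessingMs * this.items.length > toleratedWaitMs;
  }
}

The same timestamp check could just as well run at enqueue time, rejecting requests that would expire before they could plausibly reach the front.
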
kenscode:
We just finished learning priority queues and are now working with weighted graphs and trees in school. A lot of people say you don't need to know these things, but they really are used in a lot of applications.

cherubinth:
I wonder about using multiple queues in parallel, like the priority queue on 2B2T.

thisaintmyrealname:
Absolutely loved this, and thank you for introducing me to Sam! All blogs and web-based learning material should be as interactive as this!!

Winnetou:
Theo: Check out Sam!
Me: Sam who?
Theo: Yeah!

FryGuy:
This reminded me a lot of the visualizations on the Nicky Case blog (particularly, the "build a better ballot" one).

Primalmoon:
16:40 "Let's be real, a priority request should never take 18 seconds. Like imagine hitting a checkout button, and 18 seconds later you finally get to checkout. That's not acceptable."

Surprisingly, this is the opposite of how I would feel.
While I wouldn't like it, I'm much more forgiving of such delays on stuff like a checkout, because at that point I'm committed. I've been trained to not try to refresh the page or hit the button again when doing the scary operations, for fear of getting duplicate charges to my cards and whatnot, so I'm much more likely to actually see the request through to completion.

But if I'm getting such delays on the non-important stuff, like simply showing me product pages in the first place or adding stuff to my cart, then I'm much more likely to bounce off the website entirely, since I'm not committed to anything yet.

IdoAloni:
What if we have the priority queue, but when a priority request arrives we drop a low-priority request from the back of the queue and then add the new priority request after the last priority request - that is, if the queue is not already filled with priority requests?
I think this may be better than the queues in this article.

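A rough sketch of that suggestion, assuming a single bounded queue with two priority levels where eviction only happens when the queue is full; the class and method names are made up for illustration:

type Priority = "high" | "low";

interface Job {
  id: number;
  priority: Priority;
}

class EvictingPriorityQueue {
  private items: Job[] = [];

  constructor(private capacity: number) {}

  enqueue(job: Job): boolean {
    if (job.priority === "low") {
      // Low-priority jobs only get in if there is room.
      if (this.items.length >= this.capacity) return false;
      this.items.push(job);
      return true;
    }

    // High priority and the queue is full: evict the low-priority job
    // closest to the back to make room.
    if (this.items.length >= this.capacity) {
      const evictIndex = this.items.map(j => j.priority).lastIndexOf("low");
      if (evictIndex === -1) return false; // full of high-priority jobs
      this.items.splice(evictIndex, 1);
    }

    // Insert right after the last high-priority job, ahead of every low-priority one.
    const firstLow = this.items.findIndex(j => j.priority === "low");
    this.items.splice(firstLow === -1 ? this.items.length : firstLow, 0, job);
    return true;
  }

  dequeue(): Job | undefined {
    return this.items.shift();
  }
}

One trade-off: a low-priority job can wait in the queue for a while and still be evicted at the last moment, which may feel worse to that caller than being rejected up front.
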
Paul-dixv:
I don't think FIFO is from the engineering world, though; I think it was originally used as an inventory method, in retail for example.

samcalder:
Queueing is Britain's national sport!

anwiseru:
omg the end of the article was so clever

JJCUBER:
How about priority + RED, but LIFO for lowest priority? (Or, we could use LIFO within each priority bracket, especially if we have multiple priority levels.)

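A sketch of one way those pieces could fit together, assuming "RED" is approximated as dropping new arrivals with a probability that grows as a bucket fills; all names here are illustrative:

interface Task {
  id: number;
  priority: number; // 0 = highest priority
}

class RedPriorityQueue {
  private buckets: Task[][];

  constructor(private levels: number, private capacityPerLevel: number) {
    // One bucket per priority level; lower index is served first.
    this.buckets = Array.from({ length: levels }, () => []);
  }

  enqueue(task: Task): boolean {
    const bucket = this.buckets[task.priority];
    if (bucket.length >= this.capacityPerLevel) return false; // hard drop when full

    // Random early detection, roughly: drop with a probability proportional
    // to how full the bucket already is, so load sheds gradually.
    if (Math.random() < bucket.length / this.capacityPerLevel) return false;

    bucket.push(task);
    return true;
  }

  dequeue(): Task | undefined {
    for (let level = 0; level < this.levels; level++) {
      const bucket = this.buckets[level];
      if (bucket.length === 0) continue;
      // Serve higher buckets FIFO, but the lowest bucket LIFO so a fresh
      // low-priority request isn't stuck behind ones that have likely
      // already timed out.
      return level === this.levels - 1 ? bucket.pop() : bucket.shift();
    }
    return undefined;
  }
}

Swapping the per-bucket pop/shift choice gives LIFO within every bracket, which is the variation in the parentheses.
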
sarjannarwan:
Would be an interesting addition to account for request processing time, e.g. a ping vs. something that does a bunch of database work/processing.

ryanmatthews:
Was FIFO really a tech thing first? I learned about it for the first time prepping food for Freebirds (a restaurant).
