Deep dive on how static files are served with HTTP (kernel, sockets, file system, memory, zero copy)

In this video I do a deep dive on how serving static files works in web servers.

0:00 Intro
2:00 Overview
3:00 Request handling and Receive Queue
8:50 Reading file from disk
13:50 Response and the Send Queue
24:00 Sending Response to the Client
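
The zero-copy path mentioned in the title can be sketched with Linux's sendfile(2): the kernel moves pages from the page cache straight into the socket's send queue, so the file bytes never pass through user-space buffers. This is an illustrative sketch, not code from the video; the function name and chunk size are my own.

```c
#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <unistd.h>

/* Copy a file to a connected socket with sendfile(2): the kernel moves
 * pages from the page cache directly to the socket send queue, so the
 * bytes never enter a user-space buffer (the "zero copy" path).
 * Returns total bytes sent, or -1 on error. */
ssize_t serve_file_zero_copy(int sock_fd, const char *path)
{
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0)
        return -1;

    off_t offset = 0;
    ssize_t total = 0;
    for (;;) {
        /* Ask for up to 64 KiB per call; sendfile may send less, so loop. */
        ssize_t n = sendfile(sock_fd, file_fd, &offset, 65536);
        if (n < 0) { total = -1; break; }
        if (n == 0) break;            /* reached end of file */
        total += n;
    }
    close(file_fd);
    return total;
}
```

Contrast this with the classic read()+write() loop the video walks through, which copies each chunk from the page cache into a user buffer and back into the kernel send queue.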

Discovering Backend Bottlenecks: Unlocking Peak Performance

Fundamentals of Backend Engineering Design patterns udemy course (link redirects to udemy with coupon)

Fundamentals of Networking for Effective Backends udemy course (link redirects to udemy with coupon)

Fundamentals of Database Engineering udemy course (link redirects to udemy with coupon)

Follow me on Medium

Introduction to NGINX (link redirects to udemy with coupon)

Python on the Backend (link redirects to udemy with coupon)

Become a Member on YouTube

Buy me a coffee if you liked this

Arabic Software Engineering Channel

🔥 Members Only Content


🏭 Backend Engineering Videos in Order

💾 Database Engineering Videos

🎙️Listen to the Backend Engineering Podcast

Gears and tools used on the Channel (affiliates)

🖼️ Slides and Thumbnail Design
Canva


Stay Awesome,
Hussein
Comments

fundamentals of backend engineering course

hnasr

I like to digest the information as slow as possible and your explanations are what I love to watch. Thanks for being slow.

Slow is smooth, smooth is fast.

gorangagrawal

Man, this came out an hour after I stopped working on my side project to learn the first principles of how HTTP and node really work... without all the fancy abstractions from the libraries.

imanmokwena

Aah, my favourite 1.5x playback speed guy

juniordevmedia

Very good walkthrough! I like things that way. It builds intuition around the subject and encourages better thinking about the elements and problems.

andresroca

Would love a video on io_uring. Epoll doesn't have to be chatty, as you can let the process block until a fd is ready, but you do a lot of syscalls, which is the thing io_uring gets rid of the most. Currently looking into registered buffers, which, if I understand correctly, can eliminate a copy: the kernel can theoretically place socket data directly in your buffer, after it assembles packets of course. No idea yet whether it actually does.

ryanseipp
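
For context on the syscall cost mentioned in this comment: a single epoll readiness check on Linux already takes three syscalls, which is exactly the per-operation overhead io_uring batches away. A minimal sketch (the function name and timeout handling are my own, not from the video):

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Block until fd is readable, or until timeout_ms elapses.
 * Returns 1 if readable, 0 on timeout, -1 on error.
 * Note the three syscalls (epoll_create1, epoll_ctl, epoll_wait)
 * for one readiness check -- the overhead io_uring avoids. */
int wait_readable(int fd, int timeout_ms)
{
    int ep = epoll_create1(0);
    if (ep < 0)
        return -1;

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    if (epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev) < 0) {
        close(ep);
        return -1;
    }

    struct epoll_event out;
    int n = epoll_wait(ep, &out, 1, timeout_ms);  /* process blocks here */
    close(ep);
    return n;  /* 1 = ready, 0 = timed out */
}
```

A real event loop would of course create the epoll instance once and reuse it across many fds; this collapses it to one call only to make the syscall count visible.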

For me it has mostly been about the basics: RAM vs. disk and SSL termination. Those are the bottlenecks in simple content websites with huge traffic. The disk/RAM control Varnish Cache offers is great IF there is ever a need for it. There is always a RAM disk too. Add Cloudflare on top of that.

prathameshgharat

Very good, look forward to more videos

thewave

Your content is so awesome, Hussein. While watching this video, I had a question about throughput and latency with chunked streaming (like WebSocket, since it uses HTTP underneath). My question is whether chunked messages affect the total latency. For example, the total latency of sending a large file's bytes in one WebSocket message versus sending them in multiple WebSocket messages back to back (chunked). Thank you

nhancu

How is huge content served? Suppose a huge JSON file is the response to the HTTP request.
What I am asking is: does the socket cache start sending packets before the node process finishes writing to the file cache?
Also, how big is the file cache?

vivkrish

Awesome
One question: what is the lifecycle of those read/write queues? I suppose they live in the server's memory, but at what point are they destroyed? Do they live there for one request/response cycle?

ivankraev
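
To the lifecycle question above: the receive and send queues are kernel buffers tied to the socket, not to a single request/response cycle. They are allocated when the socket is created and freed when the last descriptor referring to it is closed (for TCP, after any lingering data is flushed). A small Linux sketch that inspects one and tears it down (function name is my own):

```c
#include <sys/socket.h>
#include <unistd.h>

/* Create a socket, read its receive-queue capacity via SO_RCVBUF,
 * then close it, which is the point where the kernel frees both of
 * the socket's queues. Returns the queue size in bytes, -1 on error. */
int recv_queue_bytes(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;

    int size = 0;
    socklen_t len = sizeof size;
    getsockopt(sv[0], SOL_SOCKET, SO_RCVBUF, &size, &len);

    close(sv[0]);  /* sv[0]'s send and receive queues are torn down here */
    close(sv[1]);
    return size;
}
```

With HTTP keep-alive, the same queues are reused across many requests on one connection; they only go away when the connection itself does.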

I guess one question is: is this the same on Windows servers?

SeunA-srss

What if the server (user process) read the file from disk on startup (before receiving any requests), pre-processed the content (for headers), and pre-compressed it?


This way, we'd save the time spent on read syscalls, writing headers, compressing content, etc.

Just receive the request in the user process and issue the syscall that sends the response directly.

Would that be possible?

MinatoCreations

This covers the caching-at-the-user-process (webserver) scenario. How does this translate when a reverse proxy is inserted into the mix? Does the reverse proxy perform a READ against its own disk cache looking for the file? Or does it have an implementation of GET that evaluates whether the request can be served locally rather than reaching out to a backend webserver?

rodstephens

Can you show the source code of how the write-buffer/read-file path is actually sync or async in the kernel and Node.js, so this would really sink into my memory?

WeekendStudy-xolq
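
Not the video's source code, but a small Linux illustration of the sync/async question above: write(2) to a socket is "asynchronous" only in the sense that it copies bytes into the kernel send queue and returns; actual transmission happens later. With O_NONBLOCK set, once that queue fills, write() fails with EAGAIN instead of blocking (the function name is my own):

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Write into a non-blocking socket until the kernel send queue is
 * full. Each successful write() only queues bytes; when the queue
 * fills, write() returns -1 with EAGAIN rather than blocking.
 * Returns the total bytes the kernel accepted, or -1 on error. */
long fill_send_queue(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;
    fcntl(sv[0], F_SETFL, fcntl(sv[0], F_GETFL) | O_NONBLOCK);

    char block[4096] = {0};
    long queued = 0;
    for (;;) {
        ssize_t n = write(sv[0], block, sizeof block);
        if (n < 0) {
            /* Queue full: the kernel refuses more data for now. */
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;
            queued = -1;
            break;
        }
        queued += n;
    }
    close(sv[0]);
    close(sv[1]);
    return queued;
}
```

Node's non-blocking socket writes sit on top of exactly this mechanism: when the kernel queue is full, libuv buffers the rest in user space and retries when the socket becomes writable again.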