We ACTUALLY downloaded more RAM



It is no longer just a joke. You can download more RAM! Join us on this foolish escapade to make a system with unholy amounts of memory.

Buy G.SKILL Ripjaws V 16GB (2x8GB) DDR4 3200

Buy G.SKILL Trident Z Neo 32GB (2x16GB) DDR4 3600

Buy Kingston FURY Renegade 32GB (2x16GB) DDR4 3600

Purchases made through some store links may provide some compensation to Linus Media Group.

FOLLOW US ELSEWHERE
---------------------------------------------------

MUSIC CREDIT
---------------------------------------------------
Intro: Laszlo - Supernova

Outro: Approaching Nirvana - Sugar High

CHAPTERS
---------------------------------------------------
0:00 Intro
0:31 How it works
1:39 Memory hierarchy
2:17 Linux super power
3:19 Why won't Google let us?
4:11 Our solution
5:18 aaaand it Crashed
5:50 Why this is dumb
7:05 Using it how it's intended
10:27 Outro
COMMENTS
---------------------------------------------------

We went from always having virtual memory to "what's virtual memory?" within like a decade. Wild

tassaron

With Linus it's never "click-bait". It's more like "click-ahhhh..ok..makes sense"

BreakingPintMedia

So even LINUS knows the pillow is overpriced! I knew it!

TheRipeTomatoFarms

The fact that HP and Dell actually sell laptops with only 4GB of RAM to consumers who mostly have no idea what RAM actually is amazes me.

licktastic

with the current prices of GPUs, I’d love to download one

JimiArchive

You might have missed the "vm.swappiness" kernel variable... It kinda controls how likely the kernel is to move data into the swap file/partition.
(BTW, no, I'm not recommending actually using swap as part of your system memory; I'm just adding a little thing you could've tested.)
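In case anyone wants to poke at it, a minimal sketch in Python (assumes a Linux box and root; the value resets on reboot):

# Read and raise vm.swappiness (default 60; higher means the kernel is
# more willing to move anonymous pages out to swap). Needs root.
from pathlib import Path

swappiness = Path("/proc/sys/vm/swappiness")
print("current:", swappiness.read_text().strip())
swappiness.write_text("100")  # not persistent; use sysctl.conf for that
print("now:", swappiness.read_text().strip())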

mini_bomba

5:50 You might be able to do swap on a network drive if you increase /proc/sys/vm/min_free_kbytes to reserve enough buffer memory to not hang while trying to allocate buffers for pushing bytes to the network drive. Maybe also increase /proc/sys/vm/swappiness to make the kernel more aggressive about pushing data to swap. And you could also try increasing /proc/sys/vm/page-cluster to somewhere in the range 6-10 so it moves bigger blocks at once; trying to move 4 KB blocks with random access would hit hard.

Swapping to a remote network drive could make sense at the L6 or L7 level. Before that you should swap to zram, SSD, and HDD, and fall back to the network drive only after every other swap device is full. Just give the slower swap storage a lower priority and it will not be used while there's free space on any higher-priority device. Of course, as you correctly figured out, getting anything back from the network drive would hurt you really badly - reading the data back may run at something like 100 KB/s.
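If anyone wants to try those knobs together, a rough sketch in Python (the values are illustrative guesses, not tested recommendations; needs root, and everything resets on reboot):

# Apply the three /proc/sys/vm tunables mentioned above.
from pathlib import Path

tunables = {
    "/proc/sys/vm/min_free_kbytes": "262144",  # reserve ~256 MB so network buffers can still be allocated
    "/proc/sys/vm/swappiness": "100",          # push data to swap more aggressively
    "/proc/sys/vm/page-cluster": "8",          # swap 2^8 = 256 pages (1 MB) per I/O instead of lone 4 KB pages
}

for path, value in tunables.items():
    knob = Path(path)
    print(f"{path}: {knob.read_text().strip()} -> {value}")
    knob.write_text(value)

The priority tiering is just the -p flag on swapon (or pri= in fstab): give zram the highest number, the SSD the next, and the network drive the lowest, and the kernel only spills downward once the faster tiers are full.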

MikkoRantalainen

I'm sorry, but these sponsor segues are so funnily predictable

ilovetrainsscr

You could've set the swappiness to something like 99 instead of the default 60, which would've made the kernel push stuff into swap much more eagerly.

koevoet

I remember setting up a page file back in Windows 95 on an extra 500MB drive. It actually sped up my system quite a bit. But this was back when 32MB of RAM was considered a good system.

TheInternetHelpdeskPlays

Linus doesn't need stock videos or photos to explain something because his crew does it for him and it's brilliant

jkengland

There were programs we installed on our systems in the '90s, such as "virtual RAM", that worked without crashing your system. I imagine the single-processor systems and operating systems of those days were so slow that it was not as big of a deal (they did mention that while improving your multitasking, your computer would take a speed hit). In those days we were used to waiting for things to load, so it was quite tolerable.

MichaelSidneyTimpson

When I was doing my master's degree, one of the teachers mentioned that another teacher did this back in the '80s when he was a student.
You could request RAM from a mainframe server that was running at the college.
Latency was bad, but at least you got RAM.

andresilvasophisma

I did something like this in my PhD. Several things you might want to try:
1. Using cgroups to explicitly control processes can eliminate most of the crashes, and it improves the performance of the system overall (rough sketch below).
2. Using Intel Optane (even the low-end 16GB model) as swap is much faster than swapping to a local SSD. A lot of large models that need tens of TBs of memory rely on Optane.
3. The performance of swapping inside a VM is better than swapping outside a VM.
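For point 1, it looks roughly like this with cgroup v2 (the group name is made up, and it assumes the unified hierarchy at /sys/fs/cgroup with the memory controller enabled, plus root):

# Put the current process (and its children) into a cgroup with a hard
# RAM cap, so it spills to swap instead of dragging the whole machine down.
import os
from pathlib import Path

cg = Path("/sys/fs/cgroup/swapdemo")        # hypothetical group name
cg.mkdir(exist_ok=True)
(cg / "memory.max").write_text("4G")        # hard cap on resident RAM
(cg / "memory.swap.max").write_text("max")  # unlimited swap beyond the cap
(cg / "cgroup.procs").write_text(str(os.getpid()))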

entropyxu

This reminds me of the scammy "memory enhancement software" of the early days. Some of it actually used LZ77 compression for user-space memory, but most of it only changed the size of the page file.
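For the curious, DEFLATE (as in zlib) is LZ77 plus Huffman coding, so a toy version of the idea fits in a few lines - purely illustrative:

# Compress one "page" of fairly redundant data and see how much headroom
# LZ77-style compression could fake. Real memory pages compress less;
# ~3:1 is the figure usually quoted for zram.
import zlib

page = (b"GeneralProtectionFault " * 200)[:4096]  # one 4 KB page
packed = zlib.compress(page, 6)                   # DEFLATE = LZ77 + Huffman
print(f"{len(page)} -> {len(packed)} bytes ({len(page)/len(packed):.1f}:1)")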

pi_xi

At 5:50 -- this behavior can be tweaked via the "swappiness" value. It essentially tells the kernel how eagerly it should use swap. Since Linux also does eager swapping to clear out *actual* RAM, this is quite a handy feature.

yushie

Scientific tasks are actually an area where swap, especially tiered swap, really shines - specifically when the modeling software needs to generate a huge dataset but only rarely needs to look back at data from near the start of the simulation.

I sometimes do computational fluid dynamics (CFD) where the working set of data reaches into the low 100GB range, on a system with 32GB of RAM. 20GB is enough to hold the volatile working set easily, so I configure the next 12GB as compressed RAM swap (zram), which manages a 3:1 ratio and trades CPU time to still have low latency. That gets me to 56GB of effective RAM before touching the next storage tier.

For a while I then layered my GPU's VRAM in, as it is higher-performance than my SSD; it's a 16GB card, and with zram on top that's an extra 45GB (leave a bit to actually drive your display), for a total of 101GB before you have to touch a disk. (Unfortunately, the GPU userspace memory system has gotten a bit unstable, so I had to drop this level.) Data which doesn't compress well, or the oldest data once the compressed space is full, gets written out to disk, needing between 10 and 30GB for most of my projects.

If you are using the system this way, you need to set up memory cgroups or you're in for a bad time.
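For reference, the zram tier is roughly this to set up (sizes and priority are from my setup, adjust to taste; needs root, and assumes your kernel ships the zram module and zstd):

# Create a compressed-RAM swap device and give it the highest priority,
# so the kernel fills it before any disk-backed swap.
import subprocess
from pathlib import Path

subprocess.run(["modprobe", "zram"], check=True)

zram = Path("/sys/block/zram0")
(zram / "comp_algorithm").write_text("zstd")  # must be set before disksize
(zram / "disksize").write_text("12G")         # uncompressed capacity of the tier

subprocess.run(["mkswap", "/dev/zram0"], check=True)
subprocess.run(["swapon", "--priority", "100", "/dev/zram0"], check=True)
# Disk swap then goes in at a lower priority, e.g. swapon --priority 10 /dev/sda2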

lperkins

Reminds me of "no replacement for displacement"; at the level of performance we expect today, you can't really cheat your way out of using the correct hardware.

anezay

LTT is a god at roasting 6:33
The way his expression never changed
While he casually mentioned where everyone's dad is.

Ichwiebrot

Is it just me, or is the lighting and colouring on this video ON POINT today? It looks better than normal.

tld