Performance of JavaScript Garbage Collection | Prime Reacts

Recorded live on twitch, GET IN

MY MAIN YT CHANNEL: Has well edited engineering videos

Discord

Comments

Speaking of morbid naming in programming. At one company I worked for we had a method on our objects that allowed them to dispose of themselves. It was named "commit_suicide". It just scheduled the deletion though, so we had another one named "commit_suicide_immediately". Someone had left a comment above the method that read: "life is just too much ;_;"

Hector-bjls

Elixir also has a generational GC, though it only has two buckets, and because of the nature of the system (being heavily concurrent), the major GC only needs to run very seldom. You have schedulers running on each core of your CPU, each with its own process queue. Each process is allocated a budget of function calls and has to do its work; after it runs out, the next process in the queue launches. Each process also has its own completely isolated stack and heap.

The minor GC runs and picks out objects that nothing refers to anymore. It specifically looks for so-called root objects, objects that refer to data in the process's memory; it picks out the data they refer to and places it into a new memory area, after which it deletes the rest. Then the GC goes over the first batch of moved objects in the new memory area and moves only the data they refer to. The stack of the process is moved to the new memory area, after which the garbage collection cycle finishes. Data that survives this procedure is marked so that the next GC cycle will transfer it to the old heap area. It's worth noting that the stack (execution instructions) and the heap (the data) live in the process memory; they are located at the edges of the available memory segment and grow towards one another. If at some point there is no free space left, one of the schedulers will step in and interrupt the execution of code to run the minor garbage collector. If the data that survives is too large for the process to hold, the new area is increased in size.

A major GC procedure only occurs every 65,000 standard cycles. Basically, during major collection, objects that are still being used are moved to a new area while irrelevant data is removed entirely. At one point Erlang had a bug that could crash the process if it was executing for too long, due to the GC getting too expensive or the process making too many NIF (native implemented function) calls. To solve these issues, a second scheduler type was added: the so-called dirty scheduler. Like other schedulers, each CPU core has its own dirty scheduler, but these schedulers all work within a single shared queue. Processes that are expected to run for a long time, past a timeout, are put into that queue. If the system determines that there isn't enough memory for the process to run and the next GC call might take too long, it'll run in the dirty job queue later. This is where the third GC, the so-called delayed GC, comes in. It marks the current heap as abandoned and allocates a new area with enough free memory for the process plus a bit more. If there still isn't enough memory, another area is allocated and linked to the previous one; these are heap fragments. It repeats this procedure until the process is satisfied. It's actually interesting how it calculates how much memory is needed: it uses the Fibonacci sequence.
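
The two-generation idea above (trace from the roots, keep reachable young objects, promote survivors, and sweep the old area only occasionally) can be sketched as a toy model. The following TypeScript sketch is a simplified illustration of the general concept, not Erlang's actual copying collector; the class and field names are invented for the example.

```typescript
// Toy two-generation heap: new objects start in the nursery; objects that
// survive one minor collection are promoted to the old generation, which is
// only swept during a (rarer) major collection. Reachability is simplified
// to an explicit root set plus child references.
type Cell = { refs: Cell[]; survivedMinor: boolean };

class ToyGenerationalHeap {
  private nursery = new Set<Cell>();
  private oldGen = new Set<Cell>();
  private roots = new Set<Cell>();

  alloc(refs: Cell[] = []): Cell {
    const cell: Cell = { refs, survivedMinor: false };
    this.nursery.add(cell);
    return cell;
  }

  addRoot(cell: Cell): void { this.roots.add(cell); }
  removeRoot(cell: Cell): void { this.roots.delete(cell); }

  // Minor GC: drop unreachable nursery objects, promote second-time survivors.
  minorGC(): void {
    const live = this.trace();
    for (const cell of [...this.nursery]) {
      if (!live.has(cell)) this.nursery.delete(cell);
      else if (cell.survivedMinor) { this.nursery.delete(cell); this.oldGen.add(cell); }
      else cell.survivedMinor = true;
    }
  }

  // Major GC: same tracing, but the old generation is swept as well.
  majorGC(): void {
    const live = this.trace();
    for (const cell of [...this.oldGen]) if (!live.has(cell)) this.oldGen.delete(cell);
    this.minorGC();
  }

  stats() { return { nursery: this.nursery.size, old: this.oldGen.size }; }

  // Depth-first walk from the roots to find everything still reachable.
  private trace(): Set<Cell> {
    const seen = new Set<Cell>();
    const stack = [...this.roots];
    while (stack.length > 0) {
      const cell = stack.pop()!;
      if (seen.has(cell)) continue;
      seen.add(cell);
      stack.push(...cell.refs);
    }
    return seen;
  }
}

// Usage: allocate some garbage and one rooted object, then collect.
const heap = new ToyGenerationalHeap();
const keep = heap.alloc([heap.alloc()]); // rooted object referencing a child
heap.addRoot(keep);
for (let i = 0; i < 100; i++) heap.alloc(); // unreferenced garbage
heap.minorGC();
console.log(heap.stats()); // garbage is gone; `keep` and its child survive
```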

The important thing here is that objects can be checked without suspending the entire system, because the VM has so many isolated processes. In the past when I worked with Elixir/Erlang, I always used processes as a means of manually allocating and deallocating memory. Basically, you can make it so that what would normally be a long-running process is effectively a chain of short-running processes that continually die and pass on their state. In this way you kind of get more control over the GC than with most other languages, since killing a process and moving the state allocates a new memory block just as the major GC does. You can also set the GC to sweep over processes at different intervals, so if you have a process that never needs GC, or one that needs it more often, you can specify the frequency.
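
A rough JavaScript-side analog of that chaining trick is to run each step in its own short-lived worker thread, whose whole heap is released when it exits. Below is a hedged Node.js sketch of the idea, assuming worker_threads is available; the step logic and names are stand-ins, and this is not how Elixir processes work under the hood.

```typescript
// Chain of short-lived workers: each step runs in its own worker_threads
// Worker with an isolated V8 heap, posts its small surviving state back,
// and exits, so whatever garbage it produced dies with the worker instead
// of lingering on the main thread's heap.
import { Worker } from "node:worker_threads";

const STEP_SOURCE = `
  const { parentPort, workerData } = require("node:worker_threads");
  // Stand-in for a chunk of work that allocates heavily, then hands the
  // small surviving state back to the parent.
  const next = { total: workerData.total + 1 };
  parentPort.postMessage(next);
`;

function runStep(state: { total: number }): Promise<{ total: number }> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(STEP_SOURCE, { eval: true, workerData: state });
    worker.once("message", resolve); // worker exits after posting its result
    worker.once("error", reject);
  });
}

async function main() {
  let state = { total: 0 };
  for (let i = 0; i < 5; i++) {
    state = await runStep(state); // each step's heap dies with its worker
  }
  console.log(state); // { total: 5 }
}

main().catch(console.error);
```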

As with most features in JS, you have problems with the GC because you only have the one thread to work with. No matter how well the language is optimized via its JIT and engine, it's never going to overcome that hurdle. They really should just redesign the language to allow it to be multi-threaded and more efficient. It's such a trash fire. I wish Google had strong-armed Dart into their browser; though not Dart 1.0, if we had Dart 3.0 and they had replaced JS with it... chef's kiss.

draakisback

Doom used the arrow keys. Doom and Wolf3D were controlled with the keyboard only. You used the arrows with your right hand to move around, spacebar to open doors, and something like Control to fire. Something like that; it's been about 30 years since I played that way. WASD became more popular when games used the mouse to look and fire. Quake used it, but it wasn't the first. Early 3D games had all kinds of control schemes. Under a Killing Moon would break your brain as you tried to move around the world.

Sebanisu

Two things:
- classes are syntactic sugar for prototypal inheritance (see the sketch below)
- JS needs a proper pipe operator
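
On the first point: a class declaration and a constructor function that wires up its own prototype behave much the same at runtime. A minimal TypeScript sketch with invented names; real class desugaring also handles non-enumerable methods, `new`-only invocation, `extends`, and more.

```typescript
// Class syntax...
class Counter {
  count = 0;
  increment() { return ++this.count; }
}

// ...and roughly what it desugars to: a constructor function plus a method
// attached to its prototype object.
function CounterFn(this: { count: number }) {
  this.count = 0;
}
CounterFn.prototype.increment = function (this: { count: number }) {
  return ++this.count;
};

const a = new Counter();
// TypeScript has no construct signature for a plain function, hence the cast.
const b = new (CounterFn as any)() as { count: number; increment(): number };

console.log(a.increment(), b.increment());                   // 1 1
console.log(typeof Counter);                                 // "function"
console.log(Object.getPrototypeOf(a) === Counter.prototype); // true
```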

jozsefsebestyen

For screen tearing, use the glx backend and set vsync to true. You are welcome, ThePrimeagen.

JoelJosephReji

You can fix the screen tearing issue by running "sudo rm -rf /" (rm = render manager; -rf is the refresh tag; / fixes it to the root of the rendering process, that's why it needs root)

felipedidio

Quaternions were described by William Rowan Hamilton, an Irishman.

I live in Ireland and my Game Engines lecturer would not let us forget this fact lol

Omikronik

"If you're on the server under load, each gc call will slow down your response time"

Yeah, but nobody would be stupid enough to run javascript on the server, right?

isodoubIet

One cool thing in Python is that you can actually disable the garbage collector and delete everything yourself, like in C.

seasong

I also made a thing to count major and minor GC instances. The company wouldn't let me fail people's PRs because of it though.
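
One way such a counter can be built in Node.js is with the built-in PerformanceObserver, which emits an entry for every collection. A sketch assuming a reasonably recent Node version; the counter object is invented for the example, and older Node versions expose the GC kind as entry.kind rather than entry.detail.kind, hence the fallback.

```typescript
// Count GC passes by kind using Node's perf_hooks GC performance entries.
import { PerformanceObserver, constants } from "node:perf_hooks";

const counts = { minor: 0, major: 0, incremental: 0, weakcb: 0, totalMs: 0 };

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Newer Node puts the kind on entry.detail; older versions used entry.kind.
    const kind = (entry as any).detail?.kind ?? (entry as any).kind;
    if (kind === constants.NODE_PERFORMANCE_GC_MINOR) counts.minor++;
    else if (kind === constants.NODE_PERFORMANCE_GC_MAJOR) counts.major++;
    else if (kind === constants.NODE_PERFORMANCE_GC_INCREMENTAL) counts.incremental++;
    else if (kind === constants.NODE_PERFORMANCE_GC_WEAKCB) counts.weakcb++;
    counts.totalMs += entry.duration;
  }
});
obs.observe({ entryTypes: ["gc"] });

// Churn some allocations so there is something to observe.
let junk: number[][] = [];
for (let i = 0; i < 10_000; i++) junk.push(new Array(1_000).fill(i));
junk = [];

setTimeout(() => {
  obs.disconnect();
  console.log(counts); // e.g. { minor: ..., major: ..., ... }
}, 100);
```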

Hector-bjls

I wonder how many people got up close and personal with GC solely because of Minecraft's stop-the-world GC spikes lol

cotneit

Regarding the Hamilton story, the bridge is called Broom Bridge, and it's in Dublin, Ireland. There's a plaque that commemorates the event on the bridge.

damianaa

I finally understood the message after seeing a significant speed increase from using an object pool at work.
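
For anyone who hasn't seen the pattern, an object pool just reuses a fixed set of objects in a hot path so the collector has less short-lived garbage to chase. A minimal TypeScript sketch with invented names; the reset strategy and pool sizing depend on the use case.

```typescript
// Generic object pool: acquire() hands out a recycled object if one is free,
// release() resets it and puts it back instead of letting it become garbage.
class Pool<T> {
  private free: T[] = [];

  constructor(
    private create: () => T,
    private reset: (item: T) => void,
  ) {}

  acquire(): T {
    return this.free.pop() ?? this.create();
  }

  release(item: T): void {
    this.reset(item);
    this.free.push(item);
  }
}

// Hot-path example: 2D vectors reused every frame instead of reallocated.
type Vec2 = { x: number; y: number };
const vecPool = new Pool<Vec2>(
  () => ({ x: 0, y: 0 }),
  (v) => { v.x = 0; v.y = 0; },
);

function frame() {
  const v = vecPool.acquire();
  v.x = Math.random();
  v.y = Math.random();
  // ...use v...
  vecPool.release(v); // back to the pool instead of becoming garbage
}
for (let i = 0; i < 1_000; i++) frame();
```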

kirillvoloshin

@9:07 >> "The Abortionist" and "Coronary Heart Disease" (respectively).

williamskrimpy

Tom is a genius.
Tom can fix the GC issues; we should all use JDSL.

heMech

Thank you so much!
Classes absolutely have their use in JS performance in CURRENT_YEAR.

qmster

The fact that it's 2023 and we still need to deal with screen tearing is beyond funny.

noherczeg

According to the creator of C# and TS, automatic memory management is an area where we could still stand to improve.

LiveErrors

I learned about quaternions when I did some Second Life programming many years ago. And then I encountered them again tangentially when I studied group theory many years later in an attempt to understand cryptography better.

Omnifarious

Would love to see an analysis of Python's garbage collection as well, if that is even possible. Iirc, the GC in Python runs at the end of every scope execution and removes anything that was referenced only in that scope (really, it uses reference counts and frees memory of anything that reaches a ref count of 0, but that typically happens at the end of a scope) -- I might be wrong or out of date with this.

kkiller