JavaScript performance is weird... Write scientifically faster code with benchmarking

Learn how to benchmark your JavaScript code in Deno and find out how the way you write code affects performance. Why is a traditional for loop faster than forEach? And is premature optimization the root of all evil?
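
For reference, a minimal sketch of the kind of Deno benchmark the video describes (the array size and bench names here are illustrative, not the video's exact code):

const testArray = Array.from({ length: 10_000 }, (_, i) => i);

Deno.bench("for loop", () => {
  let sum = 0;
  for (let i = 0; i < testArray.length; i++) {
    sum += testArray[i];
  }
});

Deno.bench("forEach", () => {
  let sum = 0;
  testArray.forEach((n) => {
    sum += n;
  });
});

// Save as bench.ts and run: deno bench bench.ts

deno bench reports timing statistics (average, min, max, percentiles) for each registered case, which is what the comments below are arguing about.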

Comments

0:24 "Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

allesarfint

The benchmarks should really also include memory allocations. Array map/filter create a new copy of the array every time, so even if they look decent in isolated benchmarks, they put load on the GC, which may introduce lag spikes if you spam them too much.

pokefreak
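
A sketch of the allocation pattern described above (the data here is invented):

const arr = [1, 2, 3, 4, 5];

// filter allocates an intermediate array, and map allocates another for the result:
const viaChain = arr.filter((n) => n % 2 === 0).map((n) => n * 2);

// A single loop produces the same values with one allocation:
const viaLoop = [];
for (const n of arr) {
  if (n % 2 === 0) viaLoop.push(n * 2);
}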

7:52 "You're good enough even when you're not at your best." Love this statement, very deep.

eliasepg

Shout out to those developers, like me, who write gibberish and it works!

AbdelhakOussaid

The most important thing to measure is the performance of your application, not your modules or functions. Measure those once you've detected a performance issue with your app and need to drill down into why, so you can fix it. A slow sum or sort is usually pointless to optimize if its impact on your app is negligible.

kasper_

The introduction was the most interesting part of this video.

johnvomberg

can't unsee NODE transitioning to DONE 😭

cosmiclattemusic

This is called "microbenchmarking". The problem is that it tells you how fast your module runs in isolation, but not as part of the whole.

Example: you have an algorithm that just fits into your CPU's L1 cache. It runs pretty fast, but include it in a larger program and suddenly it's very slow, because it's no longer the thing in cache; something else is.

I've had many cases where I would benchmark something to perfection, only to find out that in production the performance is the opposite of what the benchmark says.

I think microbenchmarking is useful, and it's fun, and it helps you build some confidence about your assumptions. But it's very rarely reliable in the context of a larger application.

AlexGoldring

Quicksort is not a stable sort. With a stable sort, equal items keep the same relative order they had in the original input; quicksort does not guarantee this.
Most of the time you might not care, but that IS a difference worth noting.

erikjohnson
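
To make the stability distinction concrete, a small sketch (the sample data is invented); JavaScript's built-in Array.prototype.sort has been required to be stable since ES2019:

const people = [
  { name: "Ann", age: 30 },
  { name: "Bob", age: 25 },
  { name: "Cid", age: 30 },
];

// Ann and Cid compare equal, so a stable sort keeps Ann before Cid:
people.sort((a, b) => a.age - b.age);
// → Bob (25), Ann (30), Cid (30)

// An unstable sort, such as a textbook quicksort, may put Cid before Ann.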

Learning during an 8-minute ad for a Deno course, love it!

Bayzon

8:02 Yes, yes! Using something like Deno bench doesn't tell you where the bottlenecks in your program are. You need to profile your actual code in situ. Then, if you prove there's a bottleneck somewhere and it is affecting the performance of your program in a meaningful way, you can use bench to try to find faster alternatives, but you still need to test again in situ to make sure the change actually performs better. You might have misunderstood the shape of the data, or the shape of the traffic to your code, and find that your speedup didn't actually change much. You can't always test these things accurately synthetically, and you won't know until you try it with the real thing :)

zeikjt
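
A minimal sketch of measuring in situ with nothing more than performance.now(); handleRequest here is a hypothetical stand-in for a real code path, not anything from the video:

// Hypothetical stand-in for a real code path in your application
async function handleRequest(req) {
  // ... real work ...
}

const start = performance.now();
await handleRequest({ url: "/example" });
console.log(`handleRequest took ${(performance.now() - start).toFixed(2)} ms`);

For a fuller picture than one-off timers, a profiler helps; Deno can expose the V8 inspector via --inspect or --inspect-brk so you can profile from Chrome DevTools.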

This Deno commercial brought to you by Deno.

Vracious

You can also accomplish this using a while loop, which should be the fastest way.

let arr = [1, 2, 3, 4, 5];
let sum = 0;

// shift() removes and returns the first element, mutating arr
while (arr[0] !== undefined) {
  sum += arr.shift();
}

masterserge

We've been able to squeeze out a bit more by declaring i beforehand and not looking up the length on every iteration. I know it sounds terrible, but it's ever so slightly faster.

let i = 0;
const len = arr.length;

for (; i < len; i++) {
  // loop body
}

Use at your own peril.

jakeave

You can actually make the for loop even faster by "caching" the length like this:

for (let j = 0, len = testArray.length; j < len; j++) {
// loop body
}

But it would only make sense for very large arrays. Props for the video and course.

DeaconuDanAndrei

Technically, there is a fifth way to loop: `for (let i=arr.length-1; i>=0; i--)` or `for (let i=0, l=arr.length; i<l; i++)` to avoid recalculating the array length on every iteration, though I guess it depends on the runtime whether or not this is any faster.

kkiller
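
A sketch of how one might check that claim on a given runtime with Deno.bench (the array size is arbitrary):

const arr = Array.from({ length: 10_000 }, (_, i) => i);

Deno.bench("count down", () => {
  let sum = 0;
  for (let i = arr.length - 1; i >= 0; i--) sum += arr[i];
});

Deno.bench("cached length", () => {
  let sum = 0;
  for (let i = 0, l = arr.length; i < l; i++) sum += arr[i];
});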

The Set example is not really accurate. I agree that it definitely provides performance benefits over Array.includes() for large data, but the benchmark should also include the one-time overhead of creating the Set, since we originally start with an array.

sahilaggarwal
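
A sketch of a comparison that charges the Set for its construction, as the comment above suggests (the sizes and lookup value are arbitrary):

const data = Array.from({ length: 100_000 }, (_, i) => i);

Deno.bench("Array.includes", () => {
  data.includes(99_999);
});

Deno.bench("Set.has, construction included", () => {
  const set = new Set(data); // pay the one-time build cost inside the measured code
  set.has(99_999);
});

If the Set is built once and reused across many lookups, the construction cost amortizes away and the picture changes again.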

"Languages that try to disallow idiocy become themselves idiotic."

-- Rob Pike

noirsociety

5:20 No fair, your benchmark doesn't account for the compute needed to create the Set in the first place. You're not starting from the same place, so it's not an accurate comparison.

rLgxTbQ

As pertains to the first example, repeat after me, children: I shall NOT microbenchmark things where the function call overhead overshadows the work being done by an order of magnitude or more. Seriously, this is such a common mistake it should be a meme for all intents and purposes. Now show us a run where the work being done doesn't amount to basically nothing at all, i.e., something a tiny bit realistic...

ErazerPT
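
A sketch of the kind of run the comment above asks for, where the per-element work dwarfs the call overhead (the mixing function is invented purely for illustration):

const testArray = Array.from({ length: 10_000 }, (_, i) => i);

// Invented stand-in for non-trivial per-element work
function mix(n) {
  let h = n >>> 0;
  for (let i = 0; i < 50; i++) {
    h = Math.imul(h ^ (h >>> 15), 2246822519) >>> 0;
  }
  return h;
}

Deno.bench("for loop, non-trivial body", () => {
  let acc = 0;
  for (let i = 0; i < testArray.length; i++) acc ^= mix(testArray[i]);
});

Deno.bench("forEach, non-trivial body", () => {
  let acc = 0;
  testArray.forEach((n) => {
    acc ^= mix(n);
  });
});

With a body like this, the gap between the looping constructs typically shrinks toward measurement noise.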