Moore’s Law is So Back.

According to Moore's law, the number of transistors on a microchip should double roughly every two years. Two years ago, Nvidia CEO Jensen Huang said that the law was "dead" because we're hitting physical limits for the miniaturization of transistors. Now, though, he's reversed that claim, instead predicting that we're about to see a "Hyper" Moore's law. Let's take a look.
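For a concrete sense of what these growth rates imply, here is a minimal, purely illustrative Python sketch; the starting transistor count and the ten-year horizon are made-up example values, not figures from the video. It compares the classic two-year doubling cadence with a hypothetical one-year "hyper" cadence.

# Purely illustrative: what a fixed doubling period implies over time.
# The starting count and the 10-year horizon are made-up example values.

def growth_multiple(years: float, doubling_period_years: float) -> float:
    """Growth multiple after `years`, given one doubling per `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

start_transistors = 50e9  # hypothetical starting point for a large chip
for label, period in [("classic Moore's law (2-year doubling)", 2.0),
                      ("hypothetical 'hyper' pace (1-year doubling)", 1.0)]:
    after_ten_years = start_transistors * growth_multiple(10, period)
    print(f"{label}: {after_ten_years:.1e} transistors after 10 years")
# 2-year doubling means 32x in a decade; 1-year doubling means 1024x.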

#science #sciencenews #tech #technews
Comments

I think he means to more than double prices every two years.

Rafael-rnhn

I think he uses the term "Moore's law" for simplicity. It's a term that people know. Actually, he is talking about performance, not transistor count.

henriksundt

Jensen wasn't really talking about the performance of NVIDIA chips. He was talking about NVIDIA's stock price and market capitalization...

ats

This is the end stage of the microchip S-curve. We no longer get a doubling of transistors every 1.5 years, so now we're co-optimizing hardware and software and designing custom ASICs for specific applications, which can get you another 10x to 100x at great cost. Then you hit the wall. This was all predictable decades ago. Calling it hyper Moore's law is some ironic marketing talk, given that it's actually the end of chip improvement.

tristan

I make a great shredded cabbage coleslaw. I call it Moore's Slaw, because people always want more, so I double the batch every time.

ianglenn

It’s almost like he co-opts scientific concepts to drive investor interest or something.

williamable

When you can "simply" double the performance by doubling the power consumption (either by doubling the physical size, or by running the existing architecture more aggressively), you need to stop looking at "performance" in isolation -- it's only worth paying attention to performance per dollar, per watt, per transistor, per unit of die area...

Proton_Decay
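To make the trade-off in the comment above concrete, here is a minimal Python sketch; every figure is a made-up placeholder, not measured GPU data. It shows that doubling raw performance by also doubling power, price, and die area leaves all the efficiency ratios unchanged.

# Minimal sketch: normalize "performance" by power, price, and die area.
# All figures are made-up placeholders, not measured GPU data.

from dataclasses import dataclass

@dataclass
class Chip:
    name: str
    perf_tflops: float  # hypothetical throughput
    power_w: float      # board power
    price_usd: float
    die_mm2: float

old_gen = Chip("old_gen", perf_tflops=100, power_w=300, price_usd=1000, die_mm2=600)
# "Double the performance" achieved by also doubling power, price, and die area:
new_gen = Chip("new_gen", perf_tflops=200, power_w=600, price_usd=2000, die_mm2=1200)

for chip in (old_gen, new_gen):
    print(chip.name,
          f"perf/W={chip.perf_tflops / chip.power_w:.3f}",
          f"perf/$={chip.perf_tflops / chip.price_usd:.3f}",
          f"perf/mm2={chip.perf_tflops / chip.die_mm2:.3f}")
# Raw performance doubled, but every per-watt, per-dollar, per-mm2 ratio is identical.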

I am so bitterly disappointed. I expected Nvidia to triple performance monthly. Now we get this mediocre progress.

carlbrenninkmeijer

You should give your graphics team major props for coming up with all those funny memes in your videos. I love the humor! 🙂

ispamforfood

Moore's Law basically told us we would have to change direction eventually.

drfirechief

I don't trust anyone that says "The more you buy the more you save!"

vogue

One correction at 5:13: I am pretty sure NPUs are optimized for RUNNING neural networks more efficiently in terms of power and latency, rather than for training them. TPUs (Tensor Processing Units), on or off GPUs, are what these companies are investing in for TRAINING.

arlo.infinity

What we have is Jensen's law: the profit margin per mm² of silicon sold by Nvidia doubles every 18 months.

anttikangasvieri

Technically and pedantically, "Moore's Law" refers to the transistor count on a chip doubling every couple of years. Colloquially, it has come to mean a more general "computer go faster" attribute. The physical limitations (like the size of atoms and the speed of light) were inevitable, so simply relying on increasing transistor density to make the "computer go faster" had to be replaced by various other methods, like multiple CPUs. Apple's M-series "Apple Silicon" SoCs (System on a Chip), shipping since 2020, are another way: bringing all those components that used to be separated by a zillion miles (metaphorically) together on one tiny chip saves a ton of power (which means it can run cooler, and therefore at a higher clock speed), and it's faster again because the information doesn't take as long to travel between what used to be physically separate sections of the computer.

patientzerobeat

Moore's law is more of a marketing thing, just as process nodes are also marketing labels. Both had some truth until the late 2000s, but not anymore.

MTd

In the ‘90s I worked alongside a group at Royal Dutch Shell charged with developing a processing library for seismic data—huge volumes of measurements used to generate images of acoustic impedance contrasts in the subsurface, for oil and gas exploration. At one point they held a conference with the vendors of the state-of-the-art computing hardware they used, which had developed the habit of overheating to the point of failure. Turned out that Shell’s relentless pursuit of optimization for throughput had produced code that drove the hardware at full power all the time. “Nobody ever ran such efficient code before”, was the vendor’s excuse. It led to a joint development for hardware and software. Never underestimate the relentlessness of the Dutch. 😂

ronmasters

0:34 Are leather jackets required attire for working at Nvidia these days?

StrayCatInTheStreets

Asianometry has a recent and excellent video about Dennard scaling (the shrinking of transistor structures and voltages) and how running up against physical limits forced CPU designers into adopting parallelism.

cbuchner

If I didn't misunderstand the speech where he was selling his stuff, the claim was that with AI he could analyze a domain and produce algorithms that are orders of magnitude faster. Some people seem to think that at best this would be a one-time improvement. Others think that their domains are already optimized and he's just speaking marketing nonsense so he can sell some more of his stock while it is hyper-inflated.

CheapHomeTech

Moore's law is misleading: it counts the density of transistors. But if you look at actual performance, like operations per dollar, it follows the well-known law of diminishing returns. In general, technology improves along a shape like the sigmoid function.

MrAlanCristhian
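To illustrate the S-curve shape described in the comment above, here is a minimal Python sketch; the logistic parameters are made-up placeholders chosen only to show the shape, not fitted to any real data.

# Minimal sketch: idealized exponential growth vs. a sigmoid (logistic) S-curve.
# Parameters are made-up placeholders chosen only to show the shape.

import math

def exponential(t: float, doubling_period: float = 2.0) -> float:
    """Idealized Moore's-law-style growth: doubles every `doubling_period` years."""
    return 2 ** (t / doubling_period)

def logistic(t: float, ceiling: float = 100.0, rate: float = 0.6, midpoint: float = 12.0) -> float:
    """Sigmoid growth: slow start, rapid middle phase, then saturation near a ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for year in range(0, 25, 4):
    print(f"year {year:2d}: exponential={exponential(year):8.1f}  sigmoid={logistic(year):6.1f}")
# The exponential keeps doubling indefinitely; the sigmoid accelerates through its
# middle phase and then flattens near the ceiling, i.e. the diminishing-returns regime.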