Is Moore's Law Finally Dead?



Correction to what I say at 04:07: that should have been e-beam screening, not e-beam lithography. Sorry about that.

In the past 10 years or so, tech specialists have repeatedly voiced concerns that the progress of computing power will soon hit the wall. Miniaturisation has physical limits, and then what? Have we reached these limits? Is Moore’s law dead? That’s what we’ll talk about today.


00:00 Intro
00:53 Moore’s Law And Its Demise
06:23 Current Strategies
13:14 New Materials
15:50 New Hardware
18:58 Summary
19:31 Special Offer for NordVPN

#science #technology #mooreslaw
Comments

A few comments from a chip designer.
1) Regarding the transistor size limit, we are pretty close to the absolute physical limit. Although the minimum gate length equivalent figure (the X nm name that is used to name the process node) only refers to one dimension (and even that's not quite that simple), we are talking dimensions now in the high single digits, or low double digits of atoms.
2) Regarding electron tunneling, this is already quite common in all current modern process nodes. It shows up as a consistent amount of background leakage current; however, as long as it's balanced (which it typically is), it doesn't cause logical errors per se. It does, however, increase the amount of energy that is simply turned into heat instead of performing any useful processing, so it slightly cuts into the power savings of going to a smaller node.
3) One of the biggest things impacting Moore's law in the context of the transistors/chip interpretation is manufacturing precision, crystal defects, and other manufacturing defects. Silicon wafers (and others as well) have random defects on the surface. When a design is arrayed up across the surface, a handful of the chips will typically fail to turn on at later wafer test due to these defects, and are discarded. The ratio of good chips to total chips on the wafer is referred to as the wafer yield. As long as the chips are small, a single defect only impacts yield a little, because the wafer holds hundreds or thousands of possible chips and there are only a few hundred defects that could kill a chip. But as chips get larger, there are fewer chips per wafer, so each defect kills a larger fraction of them and yield goes down. There are some recovery techniques, like being able to turn off part of a chip (this is how you got those 3-core AMD chips, for example), but ultimately, as chips get larger, the yield goes down, and thus they get more expensive to manufacture.
4) As discussed in this video, what people really care about isn't transistor density, or even transistors per package. Rather, they care about computing performance for common tasks, and in particular for tasks that take a long time. By creating custom ASIC parts for new tasks that are compute intensive (ML cores, dedicated stream processors, etc), the performance can be increased so that the equivalent compute capability is many times better. This is one of the areas of improvement that has helped a lot with performance, indeed even outpacing process improvement. For example, dedicated SIMD cores, GPUs with lots of parallel stream processors, voice and audio codec coprocessors, and so on.
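The size/yield trade-off in point 3 is often captured with a simple Poisson yield model. A minimal sketch, where the die areas and the defect density are illustrative numbers, not taken from the comment:

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: probability that a die has zero killer defects."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Illustrative defect density of 0.1 killer defects per cm^2.
d0 = 0.1
small_die = poisson_yield(1.0, d0)   # 1 cm^2 die
large_die = poisson_yield(6.0, d0)   # 6 cm^2 die (GPU-sized)
print(f"small die yield: {small_die:.1%}")   # ~90.5%
print(f"large die yield: {large_die:.1%}")   # ~54.9%
```

Under this model, yield falls exponentially with die area, which is why the large die above loses far more than six times as many parts as the small one.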

Anyway, overall a great video on the topic as always!

funtechu

An interesting take on Moore's law that I heard from Marcel Pelgrom (a famous name in the semiconductor world) during one of his talks was that, for a lot of fields in electronics, there was little 'true' research into novel techniques until Moore's law started failing us. Up to that point, the solution to everything was 'you have more transistors now, so just throw more transistors and complexity at the problem'. By the time you came up with a novel technique to improve performance on the old process, people who had simply added more transistors to the old technique in newer process nodes had already caught up with or surpassed your performance gains. You see this now: the calibration processor in some cheap opamp might technically be more 'powerful' than the entire computer network that put men on the moon, just to compensate for the crappy performance of the opamp, and even a 'dumb' automotive temperature sensor might have more area dedicated to digital post-processing to improve lifetime, stability, and resolution.
It is only now that we no longer get these gains from Moore's law (both because scaling gains are shrinking and because it is getting so, so, so expensive to do anything that isn't leading-edge CPUs, GPUs, and FPGAs in these nodes) that people are going back to the drawing board and coming up with really cool stuff to get more out of the same process node.

I can partially confirm this from my own experience in research on high-speed radio circuits. For a long time, people just used the smaller devices to get more performance (performance here being higher data rate and higher carrier frequencies). This went on for decades, up to the point that we hit 40 nm or 28 nm CMOS, and this improvement just... stopped. For the last 10 years, it has just been an ongoing debate between 22 nm, 28 nm, and 40 nm about which is best. And yet, using the same technology nodes as researchers 10 years ago, we now achieve more than 50x the data rate, at double the carrier frequencies, simply by using new techniques and a better understanding of what is going on at the transistor level.

JorenVaes

I am old enough to remember the transition from vacuum tubes to transistors. At one time, transistors were large enough to see with the unaided eye and were even sold individually in local electronics shops. It has been very interesting to watch the increasing complexity of all things electronics ...

GraemePayneMarine

Such a holistic approach to answering the question, going down so many avenues to show us how nuanced the subject is.
This was awesome 😀

Broockle

My first computer was a Commodore VC-20, with 3 KB of RAM for BASIC. Taught myself programming at age 11 on that thing... good old times.

markus

I opened YouTube and the first video in my feed was “How Dead is Moore’s Law?”. Thank you YouTube algorithm 🙌

djvelocity

Hey guys, I know nothing about the market and I'm looking to invest, any help? As well who can I reach out to?

Annpatricksy

I can’t imagine how much time you put into researching your videos. Thank you.

FloatingOnAZephyr

"The trouble is the production of today's most advanced logical devices requires a whopping 600-1000 steps. A level of complexity that will soon rival that of getting our travel reimbursements past university admin". I feel you Sabine <3

viktorkewenig

My first computer was a monstrous IBM mainframe. Punch cards and all. Ahhh... those were the days, which I do not miss.
Things that would take hours then, I can do in a few seconds today.

johneagle

The Amiga computer from the mid-1980s had a half dozen specialized chips: the memory-to-memory DMA chip (the "blitter"), a variety of I/O chips, something akin to a simple GPU, separate audio chips, etc. This approach was very common back when CPUs were 7 MHz instead of 7000 MHz.

There's also stuff like the Mill Computer, which is not machine-code compatible with x86 so hasn't really taken off. It's simulated to be way faster for way less electricity than modern chips.

One of the advantages of photonic computing is also that the photons don't interact, so it's easier to route photons without wires because they can cross each other without interfering.

darrennew

Great video Sabine! I recently finished working at a lab that is trying to use graphene-like materials to replace and supersede silicon.

One tiny error about 4 minutes in: you described e-beam lithography as a technique for characterising these devices, but what you go on to describe is scanning electron microscopy. E-beam lithography is a device fabrication technique mostly used in research labs; confusingly, the process takes place inside an SEM! Lithography is the patterning of devices, not a characterisation technique.

rammerstheman

I was convinced that analog computing would be mentioned in such a comprehensive and detailed overview of computing. If you need to calculate a lot of products, as you do when training an AI, you can actually gain a lot by sacrificing some accuracy and using analog computing.
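As a toy illustration of that accuracy-for-efficiency trade, an analog multiply-accumulate can be modelled as an exact dot product plus Gaussian noise. A hedged sketch, where the noise level is an arbitrary stand-in for real device mismatch and thermal noise:

```python
import random

def analog_dot(x, w, noise_sigma=0.01):
    """Simulate an analog multiply-accumulate: each product picks up
    Gaussian noise (an illustrative stand-in for device non-idealities)."""
    return sum(xi * wi + random.gauss(0.0, noise_sigma) for xi, wi in zip(x, w))

random.seed(0)
x = [0.5, -1.2, 0.8, 0.3]
w = [1.1, 0.4, -0.7, 0.9]
exact = sum(xi * wi for xi, wi in zip(x, w))
approx = analog_dot(x, w)
print(f"exact  = {exact:.4f}")
print(f"analog = {approx:.4f}  (error {abs(approx - exact):.4f})")
```

The point of the comment survives the noise: for neural-network training, where weights are approximate anyway, a small random error per product is often an acceptable price for doing the sum in a single analog step.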

maxfriis

I joined Intel in 2002. At that time people were predicting the end of Moore's Law.
My first "hands on" computer was a PDP-8S, thrown away by the High Energy Physics group at the Cavendish and rescued by me for the Metal Physics group. From this you can correctly infer the relative funding of the two groups.

adrianstephens

Love this one. Your [Sabine's] comparison between the complexity of designing chips and getting reimbursement paperwork through an associated bureaucracy is sooo excellent. :)

MyrLin

Sabine, maybe you could do a video on the reverse of Moore's law as it applies to the efficiency of software, especially OSes, which get slower with every generation. For the past 20 years, the real speed of many tasks in the latest OS on the latest hardware has basically stayed the same, while the hardware has become many times faster.

adriendecroy

Many people use "Moore's Law" as shorthand for the idea that compute will get faster over time, rather than for a doubling of transistor density. Wrong though that is, it's a common usage, especially in the media. The speed increases we get from the doubling alone have stagnated, which is why CPU clock speeds are also stagnant.
Nowadays, the extra transistors get us multiple cores (sadly, most compute problems don't parallelize neatly) and other structures (cache, branch prediction, etc.) that aren't as beneficial as the raw speed increases we used to get from the doubling.
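The parenthetical about parallelization is usually quantified with Amdahl's law: if a fraction of a task is inherently serial, extra cores only speed up the rest. A small sketch, where the 95% parallel fraction is an illustrative assumption:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup when only part of a task parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 95% of the work parallel, 64 cores give nowhere near 64x,
# and the speedup can never exceed 1/0.05 = 20x no matter the core count.
for n in (2, 8, 64):
    print(f"{n:3d} cores -> {amdahl_speedup(0.95, n):.1f}x")
```

This is why doubling transistor count by doubling cores stopped feeling like the old clock-speed doublings: the serial 5% quickly dominates.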

AAjax

Well summarised, Sabine!
By coincidence, I'm about the same age as the semiconductor industry, and have enjoyed working in that industry (in various support roles) my entire life. I've been blessed to have worked with many bright, creative people in that time. In the end, I hope that I've been able to contribute too.
Kids! Spend the time to learn your math, sciences, and management skills, and then you too can join this great quest/campaign/endeavor.

jehl

This is a field I've worked in, and it's interesting to hear you cover it. I definitely think that heterogeneous computing is going to be what keeps technology advancing beyond Moore's law.

The biggest problem, I think, is that silicon design has typically had an extremely high cost of entry for designers. Open-source ASIC design packages like the SkyWater 130 nm PDK enable grassroots silicon innovation. Once silicon gets the same open-source love as software does, our machines will be very different.

ZenoTasedro

There is another set of problems that are related but are a lot less obvious.

Design complexity is a problem in itself. It is difficult to design large devices and get them correct. To accomplish this, large blocks of the device are essentially predesigned and just reused. This makes it simpler to design, but less efficient. Complementary to this are testing and testability. By testing, I mean the simulations that are run before the design is finalized, prior to design freeze. The bigger the device, the more complex this task is; it is also one of the reasons that big blocks are just inherited. Testability means the ability to test and show correctness of production. This too gets more complex as devices get bigger.

These complexities have led to fewer new devices being made as one has to be very sure that the investment in developing a new chip will pay off. The design and preparation for manufacturing costs are sunk well ahead of the revenue for the device. And these costs have also skyrocketed. The response to this has been the rise of the FPGA and similar ways of customizing designs built on standard silicon. These too are not as efficient as one could have made them with a fully custom device. They have the advantage of dramatically lowering the upfront costs.
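The upfront-cost trade-off described above can be sketched as a simple break-even calculation between a custom ASIC (high non-recurring engineering cost, cheap per unit) and an FPGA (low upfront cost, expensive per unit). All numbers below are purely illustrative, not taken from the comment:

```python
def break_even_units(asic_nre: float, asic_unit: float,
                     fpga_nre: float, fpga_unit: float) -> float:
    """Volume at which total ASIC cost drops below total FPGA cost."""
    return (asic_nre - fpga_nre) / (fpga_unit - asic_unit)

# Hypothetical figures: $2M ASIC NRE at $5/unit vs $50k FPGA tooling at $60/unit.
n = break_even_units(asic_nre=2_000_000, asic_unit=5.0,
                     fpga_nre=50_000, fpga_unit=60.0)
print(f"break-even at about {n:,.0f} units")  # ~35,455 units
```

Below that volume the FPGA wins despite its per-unit inefficiency, which is exactly why skyrocketing design costs push more products toward standard-silicon customization.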

jimsackmanbusinesscoaching