This chip will accelerate AI compute way past Moore's Law

My friends Guillaume Verdon and Trevor McCourt are launching their new startup Extropic today, so I thought I'd have them on my channel to talk about their radical new ideas for a new kind of analog chip. I'm a small personal angel investor in Extropic and you may know Guillaume better on Twitter as Beff Jezos.

I'm Garry Tan, President & CEO at Y Combinator. I was an engineer, designer and product manager who turned into a founder and investor, and now I want to help you in your journey to build technology that changes the world. These videos are about helping people build world-class teams and startups that touch a billion people.

Please like this video and subscribe to my channel if you want to see more videos like this!

Follow me on Twitter and Instagram so you'll never miss my videos and ideas.
Comments

If you didn't get it, here's the video explained at a high-school level.

The video features an interview with Guillaume Verdon and Trevor McCourt from Extropic, who have developed a new type of computer chip that operates differently from traditional chips. They start by explaining that the semiconductor industry and AI development are facing a challenge known as "Moore's Wall," a limit to how small the transistors in chips can be made. This is a problem because AI demands are increasing, requiring more power and larger models.

Moore's Law, which has driven innovation in technology by making transistors smaller and more efficient, is coming to an end because transistors are reaching a size where they can't function reliably due to thermal fluctuations at the atomic level.

The team at Extropic proposes a solution: embrace the stochastic (random) nature of physics at small scales. Instead of fighting the randomness, they want to use it to their advantage in AI algorithms. Traditional AI algorithms add randomness artificially, but their idea is to use the natural randomness of electrons in their chip design.
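As a concrete illustration of "adding randomness artificially," here is a minimal sketch of stochastic gradient Langevin dynamics in plain NumPy (illustrative only, not Extropic's tooling): every update injects Gaussian noise drawn from a software pseudo-random generator, which is exactly the kind of noise a physics-based chip would instead draw from thermal fluctuations.

```python
import numpy as np

# Sketch: "artificial" randomness in a typical stochastic AI algorithm.
# Stochastic gradient Langevin dynamics (SGLD) adds Gaussian noise to each
# gradient step; on a digital chip this noise must be computed by a PRNG.
rng = np.random.default_rng(0)

def sgld_step(theta, grad, step_size):
    """One SGLD update: gradient descent step plus injected Gaussian noise."""
    noise = rng.normal(0.0, np.sqrt(2.0 * step_size), size=theta.shape)
    return theta - step_size * grad(theta) + noise

# Toy target: N(0, 1), whose negative log-density has gradient theta.
theta = np.array([5.0])
samples = []
for i in range(20000):
    theta = sgld_step(theta, lambda t: t, 0.01)
    if i >= 2000:            # discard burn-in from the far-away start
        samples.append(theta[0])
# `samples` is now (approximately) a stream of draws from N(0, 1).
```

The point of the sketch is the `rng.normal` line: that is the randomness a conventional accelerator has to manufacture, and the part Extropic proposes to get from the physics itself.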

They've created a chip that uses superconductors, which are materials with no electrical resistance, making them highly efficient. These chips are designed to be stochastic and analog, meaning they can handle randomness and continuous values, unlike digital chips that work with discrete values (0s and 1s).

Their chips are capable of accelerating sampling, a process important in the probabilistic models used in AI. Sampling on traditional digital computers is energy-intensive and slow because it requires complex circuits to generate pseudo-randomness. Extropic's chips, however, use the natural randomness in the movement of electrons to perform sampling more efficiently.
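To make the "hidden cost of sampling" concrete, here is a tiny Gibbs sampler for a three-unit Ising-style model (an illustrative sketch, not Extropic's design): every `rng.random()` call is a pseudo-random number a digital chip must compute, and the claim in the video is that physical noise could supply those draws directly.

```python
import numpy as np

# Sketch: Gibbs sampling a tiny Boltzmann/Ising-style model with 3 binary
# units in {-1, +1}. Each unit is resampled conditioned on its neighbours.
rng = np.random.default_rng(1)
J = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.8],
              [0.0, 0.8, 0.0]])   # symmetric couplings between units

def gibbs_sweep(s):
    """Resample each unit given the field from its neighbours."""
    for i in range(len(s)):
        field = J[i] @ s
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(s_i = +1 | rest)
        s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

s = np.ones(3)
draws = 0
for _ in range(1000):
    s = gibbs_sweep(s)
    draws += 3   # three pseudo-random numbers consumed per sweep
# draws == 3000: the randomness budget for sampling just 3 units 1000 times.
```

Scale those three units up to a model with billions of stochastic degrees of freedom and the pseudo-randomness itself becomes a serious compute and energy cost on digital hardware.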

The team believes that their approach, which they call "physics-based computing," can lead to significant advancements in AI by allowing for more complex models to be run more efficiently. They hope that their launch will attract talented individuals in machine learning and hardware development to join them in scaling this new technology.

They also discuss the broader implications of their work, suggesting that if current AI development continues on its current path with traditional hardware, it will face significant bottlenecks. They believe their approach can help overcome these challenges by going back to the physical principles of semiconductors and exploring new ways to harness their potential.

stripstick

I'll check in again once an AI can re-explain this shit to my stupid ass.

beboshi

Whoa. I get this and it's a really interesting idea! The co-founder explained it in a very technical way, but what he's saying is rather than fighting physics to make it more predictable, we can instead use the unpredictability for workloads like AI where that's a feature, not a bug.

dhariri

"Programmable sources of randomness based on analog stochastic circuits"... Genius!

thesimplicitylifestyle

I was listening to their Twitter Spaces nearly every night before "e/acc" was started. Love what they're doing.

PigeonPost-ts

4:10 I want to believe Garry is as lost as I am

seunoyebode

I asked ChatGPT to explain this video for a 5 year old. You're welcome lol.

Here's a simplified explanation of the transcript for a 5-year-old:

Some smart people, Guillaume and Trevor, have made a very special tiny computer chip. This chip is different from other chips because it works in a new way. Usually, computer chips try to do things perfectly without any mistakes. But this new chip is okay with making some mistakes because it helps it think faster, like a brain.

These smart people noticed that the old way of making chips is getting harder because the chips are already super tiny. So they thought, why not make a chip that works like our brain, which is a little bit messy but still very smart?

Their chip is made of a special material called superconductors, which helps it work really well without using a lot of energy. This is important because we want computers to be smart but not use too much power.

They hope that their new chip will help make computers even smarter and faster, and they want other smart people to join them in making this happen. They believe that if they can make computers work more like our brains, we can do amazing things in the future!

stripstick

Never realized there was a second bell curve to the right of first one... and that apparently I'm at the wrong tail end of that one 😅

speedmariner

I mean, this is a huge bet on stochastic AI models still being the norm a decade from now, even though governments and academia will keep pressing for explainable AI in order to make model owners liable. In any case, even if the proposed initial use case ends up not working, this is an idea worth exploring. Best of luck to the founders.

rubncarmona

I was researching analog inference for a project and liked two papers: first, "ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars," and second, "On the Accuracy of Analog Neural Network Inference Accelerators." Both are available free on the web.

Ben-vszr

How does stochastic physics enhance the efficiency of AI models compared to traditional computing methods?

AdvantestInc

So it’s about somehow harnessing the randomness of the inherent thermal fluctuations within a physical circuit, in place of artificially producing randomness via the circuit’s logic gates.

Is this more energy-efficient randomness generation really that significant? Do these chips completely supplant GPUs when it comes to AI training?

Elliot_

I like the concept of analog stochastic circuitry tailored to a trained model, assuming that probabilistic and random behavior is intrinsic and important to certain types of neural network. But we may want to master deterministic neural networks first. The major promise of the 1.58-bit neural network over 16- or 32-bit float-based networks is that we can embody a trained neural network directly in something like a common FPGA and get far more speed and efficiency than we get by running models within a simulated network.

The trouble with non-deterministic networks built stochastically is that they are fundamentally unpredictable, making them more difficult to trust and debug than a network that always produces the same output for a given set of inputs and constraints. On the other hand, if the idea is to have a trainable “black box” leading towards a general intelligence without concern for its accuracy and perfection, a stochastic analog circuit may provide a useful basis. Even so, it can and should still be modeled, trained, and tested in simulation before committing to a new hardware paradigm.

ScottLahteine

The green shirt dude lowkey looks like Andy Hertzfeld from the movie Steve Jobs.

eshanghose

Note to the editor: the video is 21:9 but the upload is 16:9, so it has black bars.

cem_kaya

Sweet, I was looking forward to some progress in analog chips for AI, and Guillaume Verdon left a good impression on the Lex Fridman podcast. Hope you have a lot of success with your approach!

the_curious

Sounds super interesting. These guys are changing the way hardware is built, especially since Moore's law will soon be obsolete.

george_davituri

Heating up my whole office while running a polynomial logistic regression is coming to an end!

michelsontheth

I've always envisioned that one day we will be able to run some of these large models that solve real-world problems. I worked on a speech recognition system and wanted the model to run on mobile, offline, at full weight precision while guaranteeing high accuracy. Slowly this is coming to life with Mojo and you guys. Keep up the spirit 💪💪 I'm a Machine Learning & Backend Engineer building solutions around AI.

ronnieleon

So a Hardware Random Number Generator (HRNG)? Pretty sure that's already a thing, but maybe they have a niche application for it?

GilesBathgate