Tesla Reveals The New DOJO Supercomputer!

Welcome to the Tesla Space, where we share the latest news, rumors, and insights into all things Tesla, SpaceX, Elon Musk, and the future! We'll be showing you all of the new details around the 2023 Tesla Model 3 and the 2023 Tesla Model Y, along with the Tesla Cybertruck when it finally arrives; ours is already on order!

#Tesla #TheTeslaSpace #Elon
Comments

🎯 Key Takeaways for quick navigation:

00:00 🖥️ Tesla's AI Division has created a supercomputer called Dojo, already operational and growing in power rapidly, set to become a top 5 supercomputer by early 2024.
01:25 💹 Dojo's computing power forecasted to reach over 30 exaflops by Feb 2024, with plans to ramp up to 100 exaflops by Oct 2024.
03:02 💰 Tesla's Dojo, a specialized AI training cluster, equates to a $3 billion supercomputer, offering remarkable AI model training capabilities.
04:00 🚗 Dojo focuses on training Tesla's full self-driving neural network, surpassing standard supercomputer definitions for specialized AI training.
05:38 📸 Dojo processes immense amounts of visual data for AI model training through labeling, aiming to automate a task previously done by humans.
07:01 🧠 Dojo adopts a unique "system on a chip" architecture, like Apple's M1, optimizing efficiency and minimizing power and cooling requirements.
08:10 💼 Dojo operates on tile levels, fusing multiple chips to create unified systems, enhancing efficiency and power in AI training.
10:00 ⚙️ Tesla can add computing power through Dojo at a lower cost, avoiding competition for industry-standard GPUs, potentially leading to a new business model.
11:23 🌐 Future versions of Dojo could be used for general-purpose AI training, enabling Tesla to rent out computing power as a lucrative business model.
12:45 🔄 Renting out excess computing power from Dojo can potentially revolutionize Tesla's profitability, similar to Amazon Web Services.

Made with HARPA AI

Rafsways

In an unprecedented move, Dojo changed its name to Skynet.

maxwellhouse

The fundamental unit of the Dojo supercomputer is the D1 chip,[21] designed by a team at Tesla led by ex-AMD CPU designer Ganesh Venkataramanan, including Emil Talpes, Debjit Das Sarma, Douglas Williams, Bill Chang, and Rajiv Kurian.[5]

The D1 chip is manufactured by the Taiwan Semiconductor Manufacturing Company (TSMC) on a 7-nanometer (nm) process node, has 50 billion transistors, and has a large die size of 645 mm² (1.0 square inch).[22]

As an update at Artificial Intelligence (AI) Day in 2022, Tesla announced that Dojo would scale by deploying multiple ExaPODs, in which there would be:[20]

354 computing cores per D1 chip
25 D1 chips per Training Tile (8,850 cores)
6 Training Tiles per System Tray (53,100 cores, along with host interface hardware)
2 System Trays per Cabinet (106,200 cores, 300 D1 chips)
10 Cabinets per ExaPOD (1,062,000 cores, 3,000 D1 chips)
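Taking the per-level counts above at face value, the hierarchy multiplies out to exactly the quoted totals; a quick sketch in Python:

```python
# Dojo ExaPOD hierarchy, using the per-level counts quoted above
CORES_PER_D1 = 354
D1_PER_TILE = 25
TILES_PER_TRAY = 6
TRAYS_PER_CABINET = 2
CABINETS_PER_EXAPOD = 10

cores_per_tile = CORES_PER_D1 * D1_PER_TILE                  # 8,850
cores_per_tray = cores_per_tile * TILES_PER_TRAY             # 53,100
cores_per_cabinet = cores_per_tray * TRAYS_PER_CABINET       # 106,200
cores_per_exapod = cores_per_cabinet * CABINETS_PER_EXAPOD   # 1,062,000

# D1 chips per ExaPOD: 25 * 6 * 2 * 10 = 3,000
d1_per_exapod = (D1_PER_TILE * TILES_PER_TRAY
                 * TRAYS_PER_CABINET * CABINETS_PER_EXAPOD)

print(f"cores per ExaPOD: {cores_per_exapod:,}")   # 1,062,000
print(f"D1 chips per ExaPOD: {d1_per_exapod:,}")   # 3,000
```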

Tesla Dojo architecture overview
According to Venkataramanan, Tesla's senior director of Autopilot hardware, Dojo will have more than an exaflop (a million teraflops) of computing power.[23] For comparison, according to Nvidia, in August 2021, the (pre-Dojo) Tesla AI-training center used 720 nodes, each with eight Nvidia A100 Tensor Core GPUs, for 5,760 GPUs in total, providing up to 1.8 exaflops of performance.[24] credit: wiki
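As a sanity check on the Nvidia figure, 1.8 exaflops spread over 5,760 GPUs works out to 312.5 teraflops per GPU, which is consistent with the A100's published peak for mixed-precision Tensor Core math; note these "exaflops" are AI (reduced-precision) flops, not the FP64 flops used in Top500 rankings. A back-of-envelope sketch:

```python
# Back-of-envelope check on the pre-Dojo cluster figures quoted above
total_flops = 1.8e18     # 1.8 exaflops (mixed-precision AI performance)
nodes = 720
gpus_per_node = 8

gpus = nodes * gpus_per_node              # 5,760 GPUs
per_gpu_tflops = total_flops / gpus / 1e12

print(f"{gpus} GPUs at {per_gpu_tflops:.1f} TFLOPS each")  # 312.5 TFLOPS/GPU
```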

terryterry

It’s crazy to see how far ahead Tesla is in the auto industry

MrO

Can you imagine? Hundreds of thousands of Teslas are feeding data to this machine every day.

Star_Dust___

Nice pace, good graphics, not too "fanboy", plenty of terminology, and raised a few questions I need to go look up and think about. All around effective YouTube. Well done.

intheshellify

Well-produced presentation with understandable analogies!

EdwinAbalain

I more than liked this video. It was a wealth of information in less than 15 minutes. 🙂

robertb

If every vehicle on public streets had a "GPS" transmitter giving out data like direction, speed, etc., FSD could take advantage, incorporating this localized (car-to-car) data to help determine its next action.
There's a Futurama episode where the gang went to the Robot Planet: the robots move like vehicle traffic but weave between each other at high speed. Perfect traffic management.

Sammasambuddha

Hey, I was the first viewer and first like. Lol. Great video. This is a game changer. One more money maker for Tesla. Here comes FSD and Optimus.

robwashere

9:10 These are the most powerful computers that you can buy! 🤣😂🤣
Oh please! What a delirious Apple fanboy statement.

I love your content, though! 😁

eliasb

A 'flop' is a floating point operation which is more complicated than a mere computer instruction.

stevemccrea

This is the beginning of true FSD, and it will be an epic win if Tesla plays its cards correctly.

Ivdde

What’s amazing is that the auto industry is just the beginning. This will be the foundation of advances in gaming, MMO-VR, physics research, simulations, and more.

MeatMechArchitect

@1:55 Too funny. An exaflop is a 1 with 18 zeros behind it... and the video shows 15 zeros. A lot of good info here on Dojo... thanks for the update.

rosslawrence

Well-produced presentation with understandable analogies! Thank you for your hard work.

JulissaLucas-fw

Actually, the semiconductor trend for the past few years has been moving away from single-chip SoC designs toward multi-chip packages, which means the SoC is not one piece of silicon but multiple pieces of silicon inside a single “CPU” package. This is what is used in the M1, the chips in the iPhone, and inside AMD's and Intel's latest cutting-edge CPUs, etc. Multiple chiplets are placed very close to each other, even stacked one on top of the other inside a “CPU package,” but the SoC is no longer a single piece of silicon in cutting-edge products.

The reason this is happening is, of course, economics. The different chips are produced on the process nodes that are most economical. So the I/O hub in an AMD CPU is on one process, while the CPU clusters are on cutting-edge processes in units of 8 or 16 cores per cluster. The CPU package then has one or more of these separate cluster chiplets placed around an I/O hub chiplet, in the AMD example. In Apple's products, the A-series and M1 CPUs, separate pieces of silicon for the CPU and for memory are stacked inside the CPU package. This is why your M-series computer's system memory can't be upgraded.

paulm

At 2:11 your big number is missing three more zeros! That number is only 1 quadrillion.

TheOlvan

Keep up the great work, Elon & Tesla Team.💯💯

Ready to see the luxury Tesla RVs also, Boss.😉😉

IngeniousDimensions

I take it he is a Mac man. In the old days we called this 'cascading', and we had 27 iMacs connected. No one ever talks about the software needed to use this configuration.
This sounds impressive, but the hardware is far beyond the available software to run it. These machines still don't have much to do.
Back then we thought 10 gigaflops was incredible. Working on these things is what I used to do, and it explains why I garden now.

GEOsustainable