The Real Reason Tesla Built The DOJO Supercomputer!

access for 1 week to try it out. You'll also get 25% off if you want the full membership!

The Real Reason Tesla Built The DOJO Supercomputer! Detailing Tesla's Supercomputer project Dojo, how it works, chip specs, and why Tesla is investing so much into AI.

Last video: The 2022 Tesla Battery Update Is Here

►You can use my referral link to get 1,500 free Supercharger km on a new Tesla:

🚘 Tesla 🚀 SpaceX 👽 Elon Musk

Welcome to the Tesla Space, where we share the latest news, rumors, and insights into all things Tesla, SpaceX, Elon Musk, and the future! We'll be showing you all of the new details around the Tesla Model 3 2021 and Tesla Model Y 2021, along with the Tesla Cybertruck when it finally arrives (it's already ordered!).

#Tesla #TheTeslaSpace #dojo
Comments

I'm an experimental physics PhD student, and I've written a quantum mechanics simulator that runs on graphics cards. When I was writing it, the top priority was to retain the highest accuracy possible with 64-bit floating-point numbers (since we want to know exactly what's going to happen when we test the experiment out in the lab). I think most supercomputers are built to do things like that. However, that level of accuracy is unnecessary for things like graphics and machine learning, so it makes perfect sense that Tesla would cut it down when designing a supercomputer purely for machine learning. I don't think you got anything wrong.
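
As an editor's aside, a minimal NumPy sketch of the precision gap being described (illustrative values, not from the simulator or the video):

```python
import numpy as np

# The same constant stored at simulation-grade and ML-grade precision:
x64 = np.float64(0.1)   # ~15-16 significant decimal digits
x16 = np.float16(0.1)   # ~3 significant decimal digits

print(f"{x64:.17f}")              # 0.10000000000000001
print(f"{np.float64(x16):.17f}")  # 0.09997558593750000

# In a physics simulator, that per-step error compounds over millions of
# timesteps. In a neural network, a 16-bit dot product usually lands close
# enough that the resulting decision (sign, argmax) is unchanged:
rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
x = rng.standard_normal(1000)
print(np.dot(w, x))                                            # float64 score
print(np.float64(np.dot(w.astype(np.float16), x.astype(np.float16))))
```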

nujuat

Software engineer here with 15 years of experience. You did a good job.

StephenRayner

Very smart approach: less precise data and more neural networks. Simply put, it doesn't matter whether a pedestrian is 15.1 m away or 15.1256335980... m away; what matters is whether he is going to step onto the road or not. For decision making, precise data isn't necessary; interpreting and understanding the data is what's crucial. The second reason low precision is acceptable is that all predictions are made over a short time span and the calculations are repeated constantly. The third reason is that the sensor inputs are themselves relatively low quality, but come in huge amounts.

edit: very good and understandable video.
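
An editor's illustration of that thresholding point (hypothetical threshold and distances, not from the video):

```python
import numpy as np

REACT_THRESHOLD_M = 15.5  # hypothetical: react if the pedestrian is closer

precise = np.float64(15.1256335980)  # "lab-grade" distance estimate
coarse = np.float16(precise)         # rounds to ~15.125 in 16-bit storage

print(float(coarse))                      # 15.125: trailing digits lost
print(precise < REACT_THRESHOLD_M)        # True
print(float(coarse) < REACT_THRESHOLD_M)  # True: same decision either way
```

The digits past the first decimal place never change the braking decision, which is why precision can be traded away for throughput.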

denismilic

40+ years of software engineering, starting with machine interfaces. Very good presentation. If I were at the start of my career, this is where I would want to spend my waking hours.

anthonykeller

This is probably another example of a philosophy most often seen at SpaceX: the best part is no part. I would probably call Dojo a super-abacus. But for their purpose, an abacus was perfect, so they built the correct machine.

TheMrCougarful

Great video! I spent my entire 45-year career in High Performance Computing, specializing in the performance of data storage systems at the various DoE and DoD labs. I am very impressed with Dojo, its design and implementation, not to mention its purpose. Truly amazing and fascinating!
Tuesday Humor: Frontera: The only computer that can give you one hexabazillion wrong answers per second! 😈

thomasruwart

It's worth mentioning that Tesla is already planning the next upgraded version of Dojo, which will have 10x the performance of the one they are building today.
Dojo will be up and running sometime in the second half of 2022; after that, I give it one year to turn Full Self-Driving into something the world has never seen. It will take video from all 8 cameras and label everything they see simultaneously, in real time, through time.
Today it's labeling small clips from individual cameras. This will be a HUGE step change in training once it's running.
It's going to save millions of lives.

jaybyrdcybertruck

The amount of love I felt for this community when you compared it to Goku and Frieza made me feel like there might be somewhere on this planet where I might fit in and that I am not as alone as I feel. Thank you TESLA and ELON and NARRATOR GUY.

neuralearth

Fun fact: the computers Tesla has been using to train FSD software today amount to the 5th largest supercomputer in the world. Even that isn't good enough, so they are leapfrogging everything.

jaybyrdcybertruck

Thanks for a very clear explanation of task-specific computing machine design. I've read ... well, skimmed ... er, sampled that DOJO white paper to the point where I glommed onto the idea that lower but sufficient precision yields higher throughput, and thus more compute power, for a specific task. Your pi example was the best! Keep these videos coming, cheers.

stevedowler

Great presentation! It filled in the gaps & I learnt some things I wasn't even aware of. Even your comment section is filled with great info!!

MichaelAlvanos

Great video! It was never about the EVs for me, but more about the AI and energy storage. TSLA

j.manuelrios

When you start the video with a long (sometimes angry/defensive) tirade about not knowing anything about supercomputers, it makes me wonder whether any of it is going to be worth listening to. You actually did a pretty good job once you got to it.

incognitotorpedo

I, for one, welcome our computer overlords... Three comments about the video:

Fantastic video! Tons of data and lots of background. Love it.

You made an analogy with Canada getting rid of the $1 bill, and I think you indicated that it reduced the number of coins we carry around, but my experience is exactly the opposite. I find that I come home with a pocket full of change every time I go out and use cash...

Second thing: Nvidia is pronounced "invidia/envidia". I used to play on the hockey team down in San Jose, California.

Again thanks for the great video and great presentation.

scotttaylor

It's not just FLOPS, it's the adaptive algorithms too! The structure of dimensions in a neural network... pattern recognition, nondeterministic weighted resolution 🤔 and memory.

raymondtonkin

Dude, your videos are super solid! I'm super impressed with the info and knowledge, and the slight bit of humor keeps things moving swiftly.

Can’t wait to hear some more info!

thefoss

Well, I am not a computer nerd! But the parts you explained that I knew were right, and the parts you explained that I didn't know sounded right! So I am happy with that. Well done.

d.c.monday

To my understanding at least, with current technology it is impossible to make a chip over a certain size with zero defects. This is what limits GPU die sizes, which are way smaller than a wafer. The only way to go wafer-scale is to design the chip to work around any and all defects. So they may actually use nearly 100% of the wafers, just with a number of sub-components disabled because of defects.
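
An editor's sketch of why that is, using the standard Poisson yield model with a made-up defect density (not Tesla's or TSMC's numbers):

```python
import math

DEFECT_DENSITY = 0.1  # assumed defects per cm^2, illustrative only

def zero_defect_yield(area_cm2: float, d0: float = DEFECT_DENSITY) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-area_cm2 * d0)

print(f"8 cm^2 GPU-sized die: {zero_defect_yield(8):.1%}")     # ~44.9%
print(f"~700 cm^2 full wafer: {zero_defect_yield(700):.1e}")   # ~4e-31

# A defect-free full wafer is hopeless; instead, expect and tolerate them:
print(f"expected defects per wafer: {700 * DEFECT_DENSITY:.0f}")  # ~70
```

With roughly 70 expected defects per wafer under these assumptions, routing around bad sub-components is the only way a wafer-scale part can ship.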

The reason wafer scale is so important is the heat dissipation of interconnects. The reason we have gone so long without GPU chiplets is that, with all of the interconnects involved, you can't just distribute a GPU across multiple dies and get better performance. Instead you have a multi-pronged interconnect nightmare, where among other problems the sheer heat generated in the die-to-die interconnects outweighs any benefit from spreading across more dies. While there is talk of MCM GPUs from AMD, and AMD already has MCM CPUs, the CPUs are designed with particular limitations that allow chiplets to work; the issues that would make an MCM GPU viable have been studied for years, and it looks like AMD may have come up with an acceptable solution so that spreading across multiple dies is a net benefit. Wafer scale takes a different approach: everything stays on the same wafer, so the interconnect issue is eliminated, at the cost of having to deal with the defects of neighboring silicon on the wafer instead of chopping everything up and throwing out the defective pieces (at least the ones too defective for more common designs to work).
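
To put rough numbers on the interconnect-heat point (editor's sketch; the pJ-per-bit figures are order-of-magnitude assumptions, not Dojo specs):

```python
# Energy cost of moving data, assumed order-of-magnitude values (pJ/bit):
ON_WAFER_PJ_PER_BIT = 0.1    # assumed: short on-wafer wires
DIE_TO_DIE_PJ_PER_BIT = 2.0  # assumed: off-die chiplet links (SerDes)

def link_power_watts(bandwidth_gbit_s: float, pj_per_bit: float) -> float:
    """Power spent purely on moving bits across a link."""
    return bandwidth_gbit_s * 1e9 * pj_per_bit * 1e-12

BW = 1000.0  # 1 Tbit/s of cross-die traffic, illustrative
print(f"on-wafer:   {link_power_watts(BW, ON_WAFER_PJ_PER_BIT):.1f} W")   # 0.1 W
print(f"die-to-die: {link_power_watts(BW, DIE_TO_DIE_PJ_PER_BIT):.1f} W") # 2.0 W
# Multiplied across dozens of high-bandwidth links, die-to-die transport
# alone becomes a serious heat source, which is the commenter's argument
# for keeping everything on one wafer.
```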

The only way to dissipate the heat from so much silicon in one spot is liquid cooling, so there is actually another layer on top, which is the water block, if I understand correctly. Another great thing about liquid cooling is that you can just carry the heat to outdoor radiators and dissipate it there. Something I would be interested in: Tesla seems to have high temperatures figured out, allowing them to boost the performance of the power electronics in their cars, so it would be interesting to know whether Dojo can get by with a simple high-heat-load outdoor radiator and thus save a bunch on cooling. Cooling can be quite expensive, especially if traditional forced-air CRACs are used, so a simple liquid loop that, from a power perspective, mainly just needs pumps to move the liquid and fans over the radiators would be a huge power savings. Chilling air to 65 F (about 18 C) and then blowing it over high-performance computer parts with crazy high-powered fans burns a tonne of power, especially if it is 115 F (over 45 C) outside.
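
A back-of-envelope version of that liquid-loop argument (editor's illustration; the flow rate and temperature rise are assumptions, not Dojo specs), using Q = m_dot * c_p * dT for water:

```python
WATER_CP = 4186.0  # J/(kg*K), specific heat of water

def loop_heat_watts(flow_kg_per_s: float, delta_t_k: float) -> float:
    """Heat a liquid loop carries away at a given flow rate and temp rise."""
    return flow_kg_per_s * WATER_CP * delta_t_k

# Assumed: 10 kg/s of water warmed by 10 K across the racks
print(f"{loop_heat_watts(10.0, 10.0) / 1e3:.0f} kW rejected")  # ~419 kW

# If the coolant leaves the racks well above outdoor air temperature, a
# plain radiator can dump that heat with only pump and fan power, whereas
# a CRAC chilling supply air to ~18 C must run compressors to do the same.
```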

ChaJ

Like I told you before,
I love your commentary: very natural and conversational!
Keep it up, my man!

oneproductivemuskpm