Mojo - the BLAZINGLY FAST new AI Language? | Prime Reacts

Recorded live on twitch, GET IN

Demo

Fireship

MY MAIN YT CHANNEL: Has well edited engineering videos

Discord

Comments
Author

If you want any language to have good performance numbers, then compare it to Python.

jaysistar
Author

Cache-line-sized vectors being their own type is a pretty brilliant idea. It probably allows even better performance than doing that manually, but it also just reduces typing.

catcatcatcatcatcatcatcatcatca
Author

It’s not about the size of your SIMD it’s what you do with it

yeahmanitsmurph
Author

I'm looking forward to Mojo. Anything Lattner touches turns to gold.

farqueueman
Author

Really cool that prime uploads these gems. I can't watch the stream since twitch is blocked by my work computer.

markusmachel
Author

Can’t wait for python programmers to evolve into Mojo programmers who just never use any of the new stuff, but now can say the language they write in is using modern process optimisations and cache-efficient data structures.

Kinda like C++

catcatcatcatcatcatcatcatcatca
Author

This new watchmojo language is looking really cool, wish I could use it to compile rust

mateusvmv
Author

Numba, a JIT compiler package for python, seems to do a good portion of what Mojo promises. I regularly get big speedups over numpy using it, particularly because it can auto-parallelize both native python loops and many numpy function calls.

copperz
Author

Great content as always, keep up the good work man!

zuma
Author

Hey, on tiling: this is necessary to keep the processor cache hot. The classical example is swapping the order of the two loops in a matrix-vector multiplication. The parallel algorithms for the same operation can be tuned by sizing the chunk of the matrix you are operating on. This becomes even more critical when you add another level of locality by using an accelerator like a GPU, or when working on an MPI cluster.
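A toy pure-Python sketch of the tiling/blocking idea described here, using matrix multiplication; the tile size is illustrative and in practice would be chosen to fit the cache:

```python
def matmul_tiled(A, B, n, tile=32):
    """Blocked n x n matrix multiply: each (tile x tile) block of
    A, B and C is reused many times while it is still hot in cache,
    instead of streaming the whole matrices on every pass."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]  # hoisted: constant over the j loop
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C
```

Pure Python won't show the speedup (the interpreter overhead dominates), but the same loop structure is what compiled languages and Mojo's tiling helpers exploit.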

Idlecodex
Author

Amazing stuff. Still, I wonder... how is it fair to compare Mojo with plain Python when numpy is basically a part of Python itself at this point? Numpy often outperforms even Julia (for large arrays).

spazioLVGA
Author

f32 is directly supported in almost all SIMD ISAs. f64 halves the number of lanes (in 128 bits you can fit four 32-bit floats, but only two 64-bit floats).
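The lane arithmetic behind this comment, sketched with NumPy's dtype sizes (the 128-bit register width is the classic SSE/NEON case; wider AVX registers scale the same way):

```python
import numpy as np

REGISTER_BITS = 128  # e.g. SSE or NEON vector register

for dtype in (np.float32, np.float64):
    element_bits = np.dtype(dtype).itemsize * 8
    lanes = REGISTER_BITS // element_bits
    print(dtype.__name__, "lanes:", lanes)
# prints:
# float32 lanes: 4
# float64 lanes: 2
```

Halving the element width doubles how many values one vector instruction touches, which is why f32 is the sweet spot for most SIMD-heavy numeric code.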

jaysistar
Author

Did some 5-minute optimizations using numpy and got it to be 1400-1800x faster than the example he provided. Still, if I can continue to code in Python and make it faster, and have strong types, then I see this as an absolute win lol
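A hedged sketch of the kind of rewrite this comment describes (the video's actual benchmark code isn't shown here): replacing a pure-Python triple loop with a single NumPy call that delegates to optimized BLAS.

```python
import numpy as np

def matmul_python(A, B):
    # The slow baseline: interpreted triple loop over lists of lists.
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_numpy(A, B):
    # One vectorized call; the heavy lifting happens in compiled BLAS.
    return np.asarray(A) @ np.asarray(B)
```

Speedups in the thousands are plausible here because NumPy removes the per-element interpreter overhead entirely, not because it changes the algorithm.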

djixi
Author

Good stuff dude, I find your content in the land of devs on YouTube very unique. Keep it up!

rybavlouzi
Author

If it lowers the difficulty of writing code that can run on GPUs and mixed use cases, I'm all for it. Still, it being signup-only feels very weird right now.

ryanfav
Author

In Rust, you can pin a shared buffer, and dispatch slices from it to each core. That’s basically what I’m expecting that Mojo code to actually be doing.

samhughes
Author

25:00 "will work on exciting projects like Excel spreadsheets, data entry, and *building hyper-intelligent armed robots* "

nekomakhea
Author

Python is chosen for its ease of use and for libraries that take care of things for us. If we add all these specialist language constructs back in, have we just undone that ease of use? Is it still easily understandable, or does it provide a reasonable pathway from noob to expert?

fanshaw
Author

Anyone else notice that their Python performance benchmarks are for Python 3.10? Python 3.11 is supposed to have some major speed improvements.

ruanpingshan
Author

Basically, if you exclude a chunk of what Python can express, what remains can be made very efficient. So add a little syntax to allow you to ring-fence stuff that you want optimised. Makes a lot of sense.

Chalisque