Liquid Neural Networks | Ramin Hasani | TEDxMIT

Comments

Well done Ramin, this shows nicely that robustness and explainability (re-traceability, interpretability) are among the core future research topics - a clear thumbs-up from me 🙂

andreasholzinger

Congratulations on such a disruptive and provocative invention! I would like to go deeper into how this technology works and how it helps explain decisions that prevent errors, like human errors or false positives: how the model's logic lets the liquid network reach the right decision, and how the machine learning algorithm helps us understand that logic and the learning process.

carlosarosero

I want to build one. I'm so freakin' excited. Thank you for your research efforts, Ramin and team at MIT CSAIL.

NewNerdInTown

That graph at 2:53 matches the shape of the notional Dunning-Kruger effect graph. Simple patterns are enough early on to get the 80% solution, but as you learn more, you realize there's more you don't know. Only with real expertise do you start to get the best outputs. I think we're seeing the same effect here.

UsedRocketDealer

I'd like to know how they actually work. Feels like no resource explains it.

BooleanDisorder

Is the liquid neural network spoken about here the same as in the paper Hasani published called Liquid Time-Constant Networks?

gbenga
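
(For context: the liquid networks in the talk are generally identified with that paper, "Liquid Time-Constant Networks" by Hasani, Lechner, Amini, Rus, and Grosu, AAAI 2021. Below is a minimal NumPy sketch of the paper's state update with its fused semi-implicit Euler solver; the parameter names W, U, b and the tanh nonlinearity are illustrative stand-ins, not the paper's exact notation.)

```python
import numpy as np

def ltc_fused_step(x, I, dt, tau, A, W, U, b):
    """One fused semi-implicit Euler step of a liquid time-constant cell.

    A sketch of the update from Hasani et al. (AAAI 2021):
        dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A
    solved in fused form as
        x' = (x + dt * f * A) / (1 + dt * (1/tau + f))
    so the effective time constant varies with the input - the "liquid" part.
    """
    f = np.tanh(W @ x + U @ I + b)  # input- and state-dependent conductance
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))
```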

This is really amazing… thank you for sharing about it… liquid neural networks…

betanapallisandeepra

Just listened to your interview with Marketplace Tech!

Mina-bcsz

Is there a Python library available for NLP tasks? :)

LoyBukid
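
(On tooling: the LTC authors maintain an open-source package, ncps ("Neural Circuit Policies", pip install ncps), with PyTorch and TensorFlow/Keras bindings. A minimal sequence-model sketch follows, assuming the ncps.torch API; the sizes and the NLP framing are illustrative - to my knowledge the package provides the recurrent cells, not a dedicated NLP pipeline.)

```python
import torch
from ncps.torch import LTC        # liquid time-constant RNN
from ncps.wirings import AutoNCP  # sparse neural-circuit wiring

wiring = AutoNCP(32, 2)           # 32 neurons total, 2 of them output neurons
model = LTC(20, wiring, batch_first=True)

x = torch.randn(8, 50, 20)        # (batch, time, features), e.g. embeddings
y, state = model(x)               # y: per-step outputs from the motor neurons
print(y.shape)                    # expected (8, 50, 2)
```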

I often think about the kind of dissonance that must exist in an ML researcher's mind when enthusiastically training agents to navigate the real world, with a sterile academic mindset, knowing that the technology will inevitably be used for indiscriminate violence in the years to come.

arthurpenndragon

i too prefer having a fruit fly brain sized network rather than a datacenter sized neural network fetching my beer

MuscleTeamOfficial

19 neurons for lane keeping is just clickbait. All the heavy lifting is done in the perception stack: your 3 convolutional layers and the condensed sensory neurons.

WizardofWar
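
(That split does exist in the published work: in the NCP driving paper (Lechner et al., Nature Machine Intelligence 2020), a convolutional head compresses the camera image, and only the compact control circuit on top has 19 neurons. A hedged PyTorch sketch of that division of labor; the layer sizes are illustrative, and a plain nn.RNN stands in for the liquid cell.)

```python
import torch
import torch.nn as nn

# Perception stack: the "heavy lifting" of turning pixels into features.
perception = nn.Sequential(
    nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
    nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(32),                      # "condensed sensory" features
)

control = nn.RNN(32, 19, batch_first=True)  # stand-in for the 19-neuron cell
head = nn.Linear(19, 1)                     # steering command

frames = torch.randn(1, 8, 3, 64, 64)       # (batch, time, C, H, W)
feats = perception(frames.flatten(0, 1)).reshape(1, 8, 32)
hidden, _ = control(feats)
steering = head(hidden)                     # (batch, time, 1)
print(steering.shape)
```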

Now my only question is whether these LNNs are as scalable as a transformer, or can at least be put into an architecture that scales as well.

MaJetiGizzle

The driving example seems to focus its attention mostly on the far distance, ignoring what is directly in front and to the sides, which suggests it would miss a person or car coming from the side, or obstacles directly in front. The example works well with a clear, uninterrupted path ahead.

robmyers

Brilliant work, hope to see it in a transformer model for NLP soon!!

Tbone

MIT will be the last when it comes to leading research in AI; OpenAI, Anthropic, and Google are light years ahead of these guys.

trbt

Just to clarify a misconception: humans are animals... it makes no sense to say "looking at brains, but not even human brains, animal brains".

CARLOSINTERSIOASOCIA

It may be noteworthy that this design was found by a human brain and not by a deep neural network. Given all the hype, it seems necessary to point this out.

MarkusSeidl

The comparison between "classical statistics" and AI was unexplained and likely misleading. If a speaker compares two graphs, the speaker should explain the underlying math that accounts for why the graphs differ. Failing to do this, as in this presentation, suggests the speaker has incomplete knowledge of the field and reduces the credibility of the speaker's claims.

AdrienLegendre

AI 🤖 can create children to achieve smarter design.

frun