Liquid Neural Networks: A New Idea That Allows AI To Learn Even After Training

Daniela Rus currently serves as the Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Rus is a renowned Andrew (1956) and Erna Viterbi professor at CSAIL. With a passion for advancing the field of robotics, Rus has made significant contributions to areas such as autonomous vehicles, swarm robotics, and distributed algorithms. Her research and leadership have earned her numerous accolades, establishing her as a prominent figure in the world of robotics and artificial intelligence.

Fuel your success with Forbes. Gain unlimited access to premium journalism, including breaking news, groundbreaking in-depth reported stories, daily digests and more. Plus, members get a front-row seat at members-only events with leading thinkers and doers, access to premium video that can help you get ahead, an ad-light experience, early access to select products including NFT drops and more:

Stay Connected

Forbes covers the intersection of entrepreneurship, wealth, technology, business and lifestyle with a focus on people and success.
Comments

Kudos Daniela Rus and Computer Science and Artificial Intelligence Laboratory (CSAIL) team at MIT! Excellent work and innovation!

LindsayHiebert

No paper link, no other links, no name for the paper. Thanks, Forbes.

justinlloyd

I don't understand anything but I completely agree with her.

thearchersb

Getting strong “Any sufficiently advanced technology is indistinguishable from magic” vibes right now thanks to the wizards at MIT. Wish I was smart and dedicated enough to learn what's happening here. Looks amazing.

JChen

This will make learning faster and yield better networks; the object-detection results look much cleaner. Hope this will be out soon. Or maybe we need to push it into TensorFlow or PyTorch for easy accessibility across the major frameworks. The more experiments get performed with this, the better the real-world outcomes will be.

crackrule

Curious how the field will receive this. Let's get her on Lex Fridman!

philforrence

Really important work. Does it scale to 1000x the neurons? Cooperative networks?

francisdelacruz

What is the difference between neural network and liquid neural network?
Unlike traditional neural networks, which only learn during the training phase, a liquid neural network's parameters can change over time, making them not only interpretable but also more resilient to unexpected or noisy data. (Apr 19, 2023)
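For context on what "parameters can change over time" means here: in the liquid time-constant (LTC) formulation from the CSAIL group (Hasani, Lechner, Amini, Rus et al.), each neuron's effective time constant depends on the current input, so the dynamics shift as new data streams in. Below is a minimal NumPy sketch under that reading; the weight shapes, the forward-Euler solver, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def ltc_step(x, u, W, b, tau, A, dt=0.01):
    """One Euler step of dx/dt = -[1/tau + f(x,u)]*x + f(x,u)*A.

    f is an input-dependent gate, so the effective time constant
    1/tau + f(x,u) changes with the input stream -- the "liquid" part.
    """
    f = np.tanh(W @ np.concatenate([x, u]) + b)
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

rng = np.random.default_rng(0)
n, m = 4, 2                        # hidden neurons, input size (illustrative)
W = rng.normal(size=(n, n + m))    # mixes hidden state and input
b = np.zeros(n)
tau = np.ones(n)                   # base time constants
A = np.ones(n)                     # bias/reversal term
x = np.zeros(n)
for t in range(100):               # drive with a toy sinusoidal input
    u = np.array([np.sin(0.1 * t), 1.0])
    x = ltc_step(x, u, W, b, tau, A)
```

Because tanh keeps the gate f above -1, the decay rate 1/tau + f stays positive here, which is one intuition for the stability claims made about these models.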

Viewpoint

It is unbelievable what they managed to do with 20000 parameters. I must learn this technique fast!

qwrxdqn

CSAIL is the premier AI lab at MIT! I know because I worked there developing their AI infrastructure 😂 I really dig this experiment and talk

superuser

This is the innovation in AI that is going to change our world beyond recognition.

KevonLindenberg

Wow, from 100,000 to 19 neurons!
Can those liquid neurons be scaled up similarly??

j.d.

It's interesting how this team has been talking about this invention for over a year, and yet has failed to gather significant attention despite the revolutionary qualities of liquid neural networks. Perhaps there is a catch that they are not telling us about?

thorvaldspear

Unbelievably ground-breaking from a lay view. I was just saying the other day that there had to be a better way, one that redefines the NN.

mahmga

Amazing. Moving human targets can be tracked by drones independent of place and season. Isn't that what we all have been waiting for?

joeriben

No link for the original paper in the description?

MathPhysicsEngineering

very cool. I look forward to the many applications this can be used in. Thanks for sharing.

energyeve

This feels like the brain trying out new neurons to improve its functioning!

vladyslavkorenyak

Can someone please inform me of the advantages of LNNs (if they can be used) for diffusion models such as Stable Diffusion, DALL-E and Midjourney?

If I am right, these diffusion models use DNNs?

SilenceOnPS

How does this network react when confronted with outside noise that directly affects the trained task? How does this compare with the other forms of networks? Thank you. I’d love to know more

LearnAINow