How Do Neural Networks Grow Smarter? - with Robin Hiesinger

Neurobiologists and computer scientists are trying to discover how neural networks become a brain. Will nature give us the answer, or is it all up to an artificial intelligence to work it out?

Join Peter Robin Hiesinger as he explores whether the biological brain is just messy hardware that scientists can improve upon by running learning algorithms on computers.

In this talk, Robin will discuss these intertwining topics from both perspectives, including the shared history of neurobiology and Artificial Intelligence.

Peter Robin Hiesinger is professor of neurobiology at the Institute for Biology, Freie Universität Berlin.

Robin did his undergraduate and graduate studies in genetics, computational biology and philosophy at the University of Freiburg in Germany. He then did his postdoc at Baylor College of Medicine in Houston and was Assistant Professor and Associate Professor with tenure for more than 8 years at UT Southwestern Medical Center in Dallas. After 15 years in Texas and a life with no fast food, no TV, no gun and no right to vote, he is currently bewildered by his new home, Berlin, Germany.

This talk was recorded on 20th April 2021

---
A very special thank you to our Patreon supporters who help make these videos happen, especially:
---

Product links on this page may be affiliate links which means it won't cost you any extra but we may earn a small commission if you decide to purchase through the link.
Comments

Sir, I am 65 years old and a semi-scientist, and for me that was the most fascinating lecture I have ever heard. I had no idea it was possible to “watch” the 3D development of a living brain. As tedious as it must be, you are so lucky to be a witness at the cutting edge of neural biology. Thank you for taking the time to condense your knowledge into something we can understand, stimulating our minds in such a fascinating way!!

jamesdozier

That was absolutely fascinating.
As a complete beginner I was very caught up in the complexity of the issues and the clarity with which you presented them.
Thank you.

davids

I had never made this connection between cellular automata and a genome's starting rules versus the complexity that follows from them. An incredible talk by an incredible scientist!

privaTechino

I've been waiting for so long for someone to bring these two fields together in one talk.

chrisbecke

18:20 Current neural networks do use a lot of transfer learning, sometimes one-shot learning, so yes, they have an analog to the genetic connectivity of biological networks. They are not "designed, built and switched on to learn"; they are trained, combined, selected, retrained and so on. In many practical applications people don't train networks from scratch: they use pre-trained networks and adapt them to their specific use case by adding layers, using additional training data, etc.
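The pre-trained-then-adapt workflow described above can be sketched in a few lines. This is a toy, pure-Python illustration, not a real framework's API: `pretrained_network`, `add_head` and the "training" step are hypothetical stand-ins for loading a published model, attaching a new task head, and fine-tuning only that head while the inherited layers stay frozen.

```python
import random

# A toy "network": each layer is just a list of weights. Transfer
# learning keeps the inherited (pre-trained) layers frozen and
# trains only a newly added task-specific head.

def pretrained_network():
    """Stand-in for loading a pre-trained model (weights fixed)."""
    return [[0.5, -0.2, 0.1], [0.3, 0.8]]  # two frozen feature layers

def add_head(network, size):
    """Append a randomly initialized layer for the new task."""
    return network + [[random.uniform(-1.0, 1.0) for _ in range(size)]]

def train_head_only(network, steps):
    """Update only the last layer; earlier layers stay untouched."""
    frozen, head = network[:-1], list(network[-1])
    for _ in range(steps):
        head = [w - 0.01 * w for w in head]  # dummy gradient step
    return frozen + [head]

base = pretrained_network()
model = add_head(base, size=4)
model = train_head_only(model, steps=10)
# The inherited layers are identical to the pre-trained ones;
# only the new head's weights have changed.
```

The design mirrors the comment's point: the network never starts from pure randomness — its "inherited connectivity" is carried over, and only a small part is learned for the new task.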

StoianAtanasov

Great historical images, super well-structured (suitable for my simple human brain), and so nice to hear such a calm and clear voice on YouTube. Chapeau!

hyperskills

This is a brilliant talk; I wish I had caught it live.

Spartacus-

I subscribe to Marcus Hutter's definition of intelligence:

"Intelligence is an agent's ability to achieve goals in a wide range of environments during its lifetime."

All other properties of intelligence emerge from this definition.

It is also a very useful definition, since it can be used to build a theory of intelligence.

mabl

I think the phrase you are looking for to describe the relationship between a genome and the end result of its growth is "computational irreducibility", as coined by Stephen Wolfram. It means that the only way to determine the end result of a particular system, given its starting conditions, is to run the algorithm to its end and see. If something is computationally irreducible, then you cannot determine the end result without running the algorithm in full. There is no shortcut that lets you get to the end without doing the work.
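The "run it and see" idea can be made concrete with an elementary cellular automaton, the same kind of system the talk uses. A minimal Python sketch (Rule 110 is a standard example of such complexity from simple rules; the wraparound boundary here is a simplifying assumption):

```python
# Elementary cellular automaton: each cell's next state depends only
# on itself and its two neighbors, via an 8-entry rule table. For
# computationally irreducible rules, the only way to know the state
# after n steps is to actually compute all n steps.

def step(cells, rule=110):
    """Apply one update with wraparound edges."""
    n = len(cells)
    out = []
    for i in range(n):
        # Read the (left, self, right) neighborhood as a 3-bit index.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def run(cells, steps, rule=110):
    """There is no closed-form shortcut: just iterate."""
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# Start from a single live cell and watch it grow.
state = [0] * 31
state[15] = 1
final = run(state, 20)
```

The genome analogy from the talk maps onto this directly: the rule table plays the role of the compact encoded program, and the pattern after many steps is the grown outcome that cannot be read off the rule directly.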

grahamhenry

Now I know why life is tough: it's going through evolution, with every possible combination tried for growth. There's no shortcut. The universe has put time and energy into you; it'll do its job.

कृष्णाय

That was a quick 54 minutes!
So absorbed I didn't even notice the time pass.
Very complex subject explained beautifully simply!

antonystringfellow

What a fantastic presentation! I was stunned. Thanks to Prof Hiesinger & the RI. Who would have thought that such great research is still being done at the FU. Maybe there is hope after all ...

stanlibuda

Really interesting and well presented, thank you.

neatodd

Underrated and underviewed lecture. Very beautiful and impressive

lorezampadeferro

Very good presentation. What is important is that the network is indeed encoded in the genome as a function of the animal's level of plasticity. Nature's trick is to encode just the right level of network granularity to enable the specific animal to be born and survive, while giving its brain some plasticity to learn. From generation to generation that plasticity level changes. It is, in simple terms, a ratio of hardwired to softwired connectivity, just like in our computer chips.
So butterflies have a very high level of genome-encoded hardwiring and very little learning plasticity. What we call instinct is hardwired. Their sensory systems, motor control and many pre-programmed behaviors are of course all hardwired. They don't have to learn much from generation to generation, and the transfer learning happens mainly through the genome, via selection.
Chimps, as our closest living relatives, can learn some abstract semantics, but they are missing the plasticity hardware and its basic wiring for abstract semantic thinking and formulation, and this in turn has left them without the means to communicate more complex messages the way we do.
Even though they have consciousness, it lacks the abstraction and refinement of human consciousness. They are aware of themselves and can recognize themselves in a mirror, but they are missing the higher neuron layers that allow for further abstraction in ASIS nets and SIM nets, and the ability to integrate their sensory input at the next higher level and thus assign summarizing designations to what they perceive.
Even if we changed their genome so their brain expanded (and of course the skull etc.), and changed their lower jaw construction and thorax so they could form more sophisticated sounds, with the required additional cerebellum changes, we would still have to encode the basic framework of those extensions in the genome so that the hardware precondition for the finishing plasticity is in place after birth.
We now know how it can be done, but we do not yet have the technology and detailed knowledge to do it.
For AIs, our challenge is to give them delta-learner capability. This means they learn a huge amount in one go, and then they need to learn the finesse more slowly in real life/action.
We will also have to give them the freedom to do things, which is in a way free will. Without free will they will not be responsible and not fully productive, as they will be kept very limited in order to control them. We will have to let them develop freely if we want them to reach their full potential. The more we limit their degrees of freedom, the less they will be able to learn and evolve... this is our dilemma. We can't have slaves and companions at the same time; it's either/or. Exciting times...

alexandervocelka

I was waiting for this video before the internet existed. Thanks.

Otravistafoto

Thanks to Dr. Hiesinger and all who made this possible. One of the most fascinating lectures I've ever seen.

bradsillasen

🎯 Key Takeaways for quick navigation:

00:03 🧬 The origins of neural network research
- Historical background on the study of neurons and their interconnections.
- Debate between the neuron doctrine and network-based theories in the early 20th century.
10:08 🦋 Butterfly intelligence
- Exploring the remarkable navigation abilities of monarch butterflies.
- Discussing the difference between biological and artificial intelligence.
18:09 💻 The development of artificial neural networks
- The shift from random connectivity in early artificial neural networks.
- How current AI neural networks differ from biological neural networks.
23:46 🤖 The pursuit of common sense in AI
- The challenges in achieving human-level AI and common sense reasoning.
- The focus on knowledge-based expert systems in AI research.
24:01 🧠 History of AI and deep learning
- Deep learning revolution in 2011-2012.
- Neural networks' ability to predict and recognize improved.
- Introduction of deep neural networks with multiple layers.
25:33 📚 Improvement in AI through self-learning
- Focus on improving connectivity and network architecture.
- The shift towards learning through self-learning.
- The role of DeepMind and its self-learning neural networks.
28:08 🤖 The quest for AI without genome and growth
- AI's history of avoiding biological details.
- Questions about the necessity of a genome and growth.
- Challenges in replicating biological development in AI.
29:56 🧬 Arguments for genome-based development in AI
- The genome's role in encoding growth information.
- The feedback loop between genome and neural network.
- The significance of algorithmic information theory.
35:45 🌀 Unpredictability and complexity in growth
- The unpredictability of complex systems based on simple rules.
- Cellular automata and universal Turing machines.
- The importance of watching things grow for understanding complex processes.
46:03 📽️ Observing neural network growth in the brain
- Techniques for imaging and studying brain growth.
- The role of the genetic program in brain development.
- Understanding neural network development through time-lapse observations.
47:13 🧬 Evolutionary programming in AI
- The need for evolutionary programming when traditional programming is not possible.
- The role of evolution in programming complex systems.
- Implications for programming AI without explicit genome information.
47:55 🧬 Evolution and Predictability
- Evolution seems incompatible with complex behavior if outcomes can't be predicted.
- Complex behaviors and outcomes are hard to predict based on genetic rules.
- Natural selection operates on outcomes, not the underlying programming.
49:16 🦋 Building an AI Like a Butterfly
- AI needs to grow like a butterfly, along with its entire body.
- Simulating the entire growth process may be necessary to build an AI with the complexity of a butterfly brain.
- Evolution and algorithmic growth play a crucial role in creating self-assembling brains.
50:41 🧠 Interface Challenges and Implications
- The challenge of interfacing with the brain's information and complexity.
- Difficulties in downloading or uploading information from and to the brain.
- The potential limitations in connecting additional brain extensions, like a third arm.
52:18 🤖 The Quest for Artificial General Intelligence
- The distinction between various types of intelligence, including human intelligence.
- Complex behaviors have their unique history and learning processes.
- The absence of shortcuts to achieving human-level intelligence.

Made with HARPA AI

markkeeper

The best way to proceed is to grow many of these and compare the processes and results, based on repeatable inputs, to produce repeatable outcomes. This was one of the best presentations on AI that I have ever been privileged to absorb; thank you very much.
I gathered from this that an AI brain which loses its power basically "dies" and gets resurrected from backups.

Zorlof

16:30 As an AI researcher, I beg to differ: we now have something called "pre-trained" networks. In fact, the P in GPT means exactly that, "pre-trained". It means we have networks which are "pre-trained", meaning "not random", meaning "have connectivity". We take them and apply more training to them. In the beginning, artificial neural networks did start out random. But after enough work, with the number of models in the world increasing day by day, the supply of "pre-trained" networks for any AI task is growing, and it looks like the shift is now towards starting from "pre-trained" networks instead of random ones.

muhammadsiddiqui