A Fruitful Reciprocity: The Neuroscience-AI Connection

Dan Yamins, Stanford University

Abstract: The emerging field of NeuroAI has leveraged techniques from artificial intelligence to model brain data. In this talk, I will show that the connection between neuroscience and AI can be fruitful in both directions. Towards "AI driving neuroscience", I will discuss a new candidate universal principle for functional organization in the brain, based on recent advances in self-supervised learning, that explains both the fine details and the large-scale organizational structure of the visual system, and perhaps beyond. In the direction of "neuroscience guiding AI", I will present a novel cognitively-grounded computational theory of perception that generates robust new learning algorithms for real-world scene understanding. Taken together, these ideas illustrate how neural networks optimized to solve cognitively-informed tasks provide a unified framework for both understanding the brain and improving AI.
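The self-supervised learning advances the abstract alludes to are typically contrastive: embeddings of two augmented views of the same input are pulled together while other inputs are pushed apart. A minimal NumPy-only sketch of such an InfoNCE-style objective is below; this is an illustrative toy, not the specific models or Local Aggregation method from the talk, and the array shapes and noise levels are arbitrary assumptions.

```python
# Toy sketch of a contrastive self-supervised objective (InfoNCE-style).
# Assumption: small random vectors stand in for learned embeddings of
# two augmented "views" of the same inputs.
import numpy as np

def normalize(x):
    # Project each row onto the unit sphere so dot products are cosines.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def info_nce_loss(z_a, z_b, temperature=0.1):
    # Row i of z_a should match row i of z_b (the "positive" pair);
    # every other row acts as a negative.
    z_a, z_b = normalize(z_a), normalize(z_b)
    logits = z_a @ z_b.T / temperature            # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
view_a = base + 0.01 * rng.normal(size=base.shape)  # two light "augmentations"
view_b = base + 0.01 * rng.normal(size=base.shape)

aligned = info_nce_loss(view_a, view_b)         # correctly matched pairs
shuffled = info_nce_loss(view_a, view_b[::-1])  # deliberately mismatched pairs
```

With matched views the loss is low; shuffling the pairing raises it, which is the signal a network trained this way exploits: no labels or "base classifiers" are needed, only the correspondence between views.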

Bio: Dr. Yamins is a cognitive computational neuroscientist at Stanford University, an assistant professor of Psychology and Computer Science, a faculty scholar at the Wu Tsai Neurosciences Institute, and an affiliate of the Stanford Artificial Intelligence Laboratory. His research group focuses on reverse-engineering the algorithms of the human brain, both to learn how our minds work and to build more effective artificial intelligence systems. He is especially interested in how brain circuits for sensory information processing and decision-making arise by optimizing high-performing cortical algorithms for key behavioral tasks. He received his AB and PhD degrees from Harvard University, was a postdoctoral researcher at MIT, and has been a visiting researcher at Princeton University and Los Alamos National Laboratory. He is a recipient of an NSF CAREER Award, the James S. McDonnell Foundation award in Understanding Human Cognition, and a Sloan Research Fellowship. Additionally, he is a Simons Foundation Investigator.
Comments

AI videos: the moment-to-moment switching looks exactly like how DMT entities change their forms and shapes.

quzkpvk

I think this way of looking at the brain to model computer neural networks omits the key difference between brains and computers. Brains have discreteness built in, which makes the process of learning to identify patterns and shapes, along with the relationships between them, much easier. Computers have no intrinsic means of generating discrete elements to distinguish one element from another, such as in a collection of pixels. Therefore a computer can never match the way the brain learns, because of that lack of discrete data encoding based on biomolecular values. (To see this best, look at the cells in the skin of a camouflaging octopus.)

So the fundamental behavior of computer neural networks is building a model that approximates the base classifier or set of classifiers (dog, cat, human) that you want to use as part of identification, because without that base classifier there is no way to identify anything in a computer imaging pipeline. That is why unsupervised learning doesn't work: there are no base models to compare against. And this is where the contrastive approach seems to work, but even there, it doesn't have the fidelity and flexibility of the way human brains work. Local aggregation is a mathematical approximation totally different from how brain neural networks work. A child will still be able to distinguish two dogs based on the type of fur, the color of fur, and other discrete characteristics that a computer neural network has no way of understanding innately, because these unsupervised models are still generalizing a high-level classifier, such as dog, rather than really understanding all the characteristics and elements that make up a dog: legs, tail, fur, ears, snout, tongue, etc.

Ultimately, all computer neural networks operate on a mathematical model that tries to generate discreteness through classifiers based on computational processing. That imposes a cost that doesn't exist in biology, while achieving a far lesser degree of fidelity and detail. Brains don't have built-in, previously trained classifiers for things.

willdmindmind

I like the parsimony approach. I'm not sure I get this right, but couldn't a working-memory-type system then selectively grant access to lower-level *-topic maps in parallel, providing feedback for so-called higher brain functions? The foundational model would deliver the mappings and basic functionality for higher brain functions to access and optimize (learn) target functions, whichever are useful in a social context, and thus, in light of evolution, stabilize genetics.
A few more months and CAPTCHAs won't work anymore.

If those evolutionary parameters are hard-coded, shouldn't there be genes, markable or knockable during development, that determine the connection strength?

hyphenpointhyphen