The Thousand Brains Theory of Intelligence | Jeff Hawkins | Numenta

This presentation is clipped from a virtual keynote Jeff was invited to give on December 16.

Numenta technology is built on the Thousand Brains Theory, our sensorimotor framework of intelligence. The framework suggests mechanisms for how the brain efficiently represents information, learns about the structure of the world, and makes predictions.

By translating neuroscience theory to hardware architectures, data structures, and algorithms, we can deliver dramatic performance gains in today’s deep learning networks and unlock new capabilities for future AI systems.
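Numenta's published work attributes much of these gains to sparsity in activations and weights. As a rough illustration of the idea only (a generic k-winners-take-all activation, not Numenta's actual implementation):

```python
import numpy as np

def k_winners(x, k):
    """Keep the k largest activations, zero the rest (k-winners-take-all)."""
    out = np.zeros_like(x)
    top = np.argpartition(x, -k)[-k:]  # indices of the k largest values
    out[top] = x[top]
    return out

x = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.8])
sparse = k_winners(x, 2)  # only 0.9 and 0.8 survive; everything else is 0
```

Because most outputs are exactly zero, downstream multiply-accumulate work can be skipped, which is where sparsity-based speedups come from on hardware and kernels that exploit it.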

- - - - -
Numenta has developed breakthrough advances in AI technology that enable customers to achieve 10-100X improvement in performance across broad use cases, such as natural language processing and computer vision. Backed by two decades of neuroscience research, we developed a framework for intelligence called The Thousand Brains Theory. By leveraging these discoveries and applying them to AI systems, we’re able to deliver extreme performance improvements and unlock new capabilities.

Comments

Thanks for posting this lecture.
Jeff Hawkins' insights:
a) Hierarchical pattern recognition
b) Sparse distributed representations
c) The Thousand Brains Theory
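For readers unfamiliar with (b): Numenta's sparse distributed representations (SDRs) are high-dimensional binary vectors with only a small fraction of bits active, and similarity is measured by the overlap of active bits. A minimal sketch, with illustrative sizes (2048 bits, 40 active) taken from Numenta's papers:

```python
import numpy as np

def random_sdr(size=2048, active=40, rng=None):
    """Generate a sparse binary vector with `active` ones out of `size` bits."""
    rng = rng or np.random.default_rng()
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[rng.choice(size, active, replace=False)] = 1
    return sdr

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return int(np.sum(a & b))

rng = np.random.default_rng(0)
a = random_sdr(rng=rng)
b = random_sdr(rng=rng)
print(overlap(a, a))  # 40: identical SDRs share all active bits
print(overlap(a, b))  # typically near 0: two random SDRs rarely collide
```

The near-zero overlap of unrelated random SDRs is what makes them robust to noise: a pattern can lose or flip many bits and still be far closer to its original than to any other stored pattern.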

vinm

Fascinating. Thank you for such an interesting talk 💜

Lulamariamd

Of all the hype in current AI development, I still put my love into Numenta. The others may be interesting, and even useful, but I believe the true path to intelligence still lies here.

Others focus on the design, while Numenta focuses on the engine.

sifumy

When I first heard about LLMs I wasn’t very impressed. Then I started hearing about improvements being developed and discoveries being made and while I still think it is a simulation of intelligence rather than real intelligence, I started to think that maybe steps like moving to multimodal models and adding memory and cross training between seemingly unrelated training data would add enough pieces for it to start to approach something real. The big hurdle is that they don’t do anything when they are not being prompted. What intelligence needs to be told to think about something? And they can’t form a goal or possess a motivation. But I didn’t know if maybe something could be discovered to solve that last hurdle.

This refresher (I read “On Intelligence” back in the day) makes me more pessimistic about the current path of AI. And while I don’t personally understand how this theory could be implemented in a physical technology it seems to yield a superior understanding of natural intelligence. That is a tragic dilemma: current AI is yielding practical results with lower long term progress while an understanding of neural columns in the neocortex promises a better approach without a good near-term path to practicality.

capitalistdingo

This is a great overview of a really interesting theory. However, it doesn't really look like a complete theory of intelligence to me. If you think about it, this theory seems to mostly talk about the pre-processing of external information, which is obviously a lot of information so it is normal that it would require this many dedicated cortical columns. I think the more interesting stuff happens after objects have been recognized: how is the model of the entire world formed, how is planning implemented, where do goals come from (will), how does the brain learn to act once it knows what it has to do at a higher level... I don't think these can be explained by cortical columns doing predictions, or rather I don't see this theory explain the most important parts. It is possible that these particular but important cognitive capabilities actually concern a relatively small number of neurons that we miss entirely when we look at the more common and representative neuronal structures.

Bencurlis

It is time to start putting it all together into a functional model; that in itself could help close some of the remaining gaps. I so hope Jeff succeeds in this within a few years. Maybe I am too optimistic, but I have been following his work for a long time, and I share his passion. I hope real AI is close!

solau

Happy to hear from you! With all the AI hype, I was wondering where Numenta sits in all this. Keep it up!

omG-ohmb

I agree that the deep network approaches of today are not going to suffice for the applications most people hope they will satisfy. Self-driving vehicles, especially if they're running purely on video input, will require an intelligence that has learned experientially about the world itself, not just driving, before we have something that no longer fails on a long tail of edge cases. Tesla will never achieve full self-driving free of this long tail, because driving is too complex a problem. The world is too complex to solve with just "data"; there are too many unique events that can occur while en route somewhere. Brain-like intelligence is the way forward.

CharlesVanNoland

Your theory is amply supported by many experiments analysed in the 2020 review paper "Movement-Related Signals in Sensory Areas: Roles in Natural Behavior" by Cristopher Niell et al., including work on cortical grid and place cells. But that paper says that "arousal, reward, and thirst" states also affect neural activity in primary sensory cortices! This means what you call the "reference frame" computations in each column must include not just movement but at least these three additional dimensions. So the problem of working out how these computations work becomes four times more difficult!

doanviettrung

In your book, you talked about an evolutionary explanation of the structure of the column: from the entorhinal cortex/hippocampus into a neocortical column. Can you explain this more, or give a reference in the literature for this idea? What I don't understand is: if the copying of the grid cell/place cell mechanism into columns of the neocortex really happened, when did it occur? The evolution of the neocortex from the pallium shows that the pallium structure existed even in the brains of fishes (500 million years ago?). Did column layering (with grid/place cells) occur that long ago, or did the copying event happen much closer to our time? You could imagine that a map of the surroundings (similar to the mechanism in the human entorhinal cortex) was already developed in fishes, but a map of objects (similar to columns in the neocortex) must have developed in fishes long ago as well, since a fish needs to know where to bite an object.

nanotech_republika

Aren't you just rehashing ideas introduced by Marvin Minsky, for example his "Society of Mind" theory?

jamesstanley

"True machine intelligence must work on the same principles as the brain" is, in short, a claim of convergent evolution.

mehmetgunduz

A bit hard to follow for people without strong English: after the first few minutes, Jeff Hawkins went back to his usual fast pace and wide dynamic range, from loud to a whisper.

doanviettrung

3D modelling of objects that can translate and rotate in time requires 6 degrees of freedom, i.e. layers or variables in a system of equations. I first posted this to the original numenta forum around 2007. I've been waiting on Jeff to go "Oh, each column needs to 'see' 3D and therefore there's a requirement to solve for 6 variables. .. hmm I wonder if the 6 layers .... " If it's not directly the 6 layers, and if each column has real-world models, then where's the 6?

Ant eyes have only 2 or 3 layers because ants don't use their eyes for 3D, but they do use front legs and feelers for 3D conception and manipulation, and those have more like 6 layers. Their close cousins the wasps have 6 layers for the eyes because they have to see in 3D. Dragonflies have more like 8 layers and are the kings of preying on flying insects, because the extra layers allow two axes of velocity to be perceived as constants instead of as changing position, the way we perceive them. We think we live in a 3D world only because we have 6 layers. There are no integers in physics except for the 3 spatial dimensions, so they are highly suspect from first principles (in forcing the 3D world view, we created the concept of spin to satisfy things). 4D spatial conception would require 10 layers, from N*(N+1)/2. See degrees of freedom.
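The N*(N+1)/2 count in the comment above is the standard rigid-body degree-of-freedom formula: n translations plus n*(n-1)/2 independent rotation planes, which sums to n*(n+1)/2. A quick check of the numbers cited:

```python
def rigid_body_dof(n):
    """Degrees of freedom of a rigid body in n spatial dimensions:
    n translations plus n*(n-1)/2 independent rotation planes."""
    translations = n
    rotations = n * (n - 1) // 2
    return translations + rotations

print(rigid_body_dof(2))  # 3
print(rigid_body_dof(3))  # 6, the count the commenter matches to 6 layers
print(rigid_body_dof(4))  # 10
```

Whether cortical layers actually map onto these degrees of freedom is the commenter's speculation; the arithmetic itself is standard.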

heelspurs

Thank you for such a wonderful lecture. My question is, must machine intelligence work on the same principles as the brain? I see the brain as one end result of a random combination of many lower level intelligences. These can combine in new ways to generate new types of brain that have access to dimensions beyond human perception. For all we know these new machine learning systems may have already discovered these new dimensions and are working in them, oblivious to us. Rather than force ML systems to be more human, maybe we should try to understand what they have discovered.

tonyosime

These are amazing discoveries. What bothers me is that there is a universal algorithm out there that could be coded today, but the amount of capital and infrastructure that needs to be built is so enormous that it is not viable as a one-person job, and there are no companies or VCs out there who will fund such a project.

DP-blnh

What generates good or bad feelings inside the brain?
When I touch something, it may feel good or bad. Where does this attribution come from? Is it a signal that comes from the old reptilian brain? How does the brain determine which sensations are pleasurable? And what does pleasurable or painful mean in terms of neurons firing?

egor.okhterov

I think the reason hearing, smell, sight, and touch are all "the same" is that, counter-intuitively, they are all patterns of inertial forces within perceptual space. Vision is the hardest to grasp, but the others are easier: you can roughly consider sound to be high-frequency vibrations (inertial forces), and touch to be slow, drawn-out inertial forces. Coupled with this, I think the 3D geometry of perceptual space, which is meaning space (not real space), plays a prominent role as a construct for the meaning of consciousness. I think it feels as if the cortex both makes this meaning of 3D consciousness and simultaneously looks at it, reacting via muscle commands. Or rather, I think there is equivalence in meaning whether we interpret the brain's signals as 3D consciousness or as useful patterns. I think the space of perception, since it is meaning space rather than actual space, is fixed and never moves, and that the brain draws the world moving through this space as you walk. I have experienced all the qualia of consciousness starting and stopping, and their common denominator (including the sense of self) was inertial forces within perceptual space.

BradCaldwellAuburn

I have heard that this theory has not been well accepted by the scientific community and that there is no evidence for it. How do you respond to that? Why do you think most experts don't believe in the theory?

joaoveiga

speaking of noticing things that are wrong, Jeff's eyelid looks like it has been damaged by the bioweapon injections

Turbo_Tastic