Numenta Explained

How does Numenta's hierarchical temporal memory system work? I'm going to go over its technology stack and how it differs from deep learning. Numenta is trying to solve Artificial General Intelligence by replicating the human neocortex in silicon.

Code for this video:

Please Subscribe! And like. And comment. That's what keeps me going.

Connect with me:

More learning resources:

Join us in the Wizards Slack channel:

And please support me on Patreon:
Sign up for my newsletter for exciting updates in the field of AI:
Comments

I have so many tabs up right now but I couldn't resist opening up another just for this video.

anteconfig

Glad to see a video on Numenta. They are way ahead of the game and deserve more attention.

wikasraja

A typical pattern I find when trying out a new cool library, like Numenta's nupic:

Step 1: Watch tutorial vid like Siraj's and get super excited/motivated to install the library and start playing around with it. Yeah, I got mah coffee, LET'S DO THIS!! PUMPED!
Step 2: Clone the github repo to your machine and >> pip install nupic
Step 3: Get error message "Command "python setup.py egg_info" failed with error code 1 in
Step 4: Google the error message, check StackOverflow and the Numenta github issues section for error message.
Step 5: Try every proposed "solution" and "fix" for error, all in vain.
Step 6: Spend literally hours of a Saturday morning uninstalling, creating a new virtual environment, reinstalling, getting the same error message. Frustration begins.
Step 7: Open new issue in Numenta's github. Await response anxiously. While waiting, double check your prior attempts are still yielding the error message.
Step 8: Saturday morning turns into Saturday afternoon. Girlfriend gets upset you've been spending the whole day on the computer. Frustration mounts.
Step 9: Maybe the issue lies somewhere with setuptools or easy_install. Nope, same error message. Go for a walk or run.
Step 10: Try some more. Literally ready to throw laptop out of window.
Step 11: Saturday evening. Give up. Defeated, and not for the last time, by the Herculean task of trying to install, configure and run a package in the open source world.
Step 12: Curse to self as you realize you just wasted an entire day, with nothing to show for your effort. :-(

RedShipsofSpainAgain

There will come a day when Siraj simply throws a Wikipedia article at a GAN and it makes his video all by itself!

vijayabhaskar-j

Hierarchical Temporal Memory (HTM) is nothing like an autoencoder except for its ability to reconstruct the inputs. Despite its name, it usually doesn't use layers as a hierarchy. Also, I'm a little disappointed that you left out an explanation of the Temporal Memory. It's the core algorithm of HTM, and the reason why HTM works 'ok' without layers and why the neurons have a predictive state. Currently, HTM does not model movement, but Numenta has ideas about how it will work in the near future. And Hinton's idea of a capsule is quite different from Numenta's column. Their functions are completely different; they're only structurally similar in that both are inspired by columns in the neocortex.
P. S. Sorry for my bad English. I'm not a native English speaker.. XD
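
To make the "predictive state" idea above concrete, here is a minimal toy sketch in plain Python. It is not nupic's Temporal Memory implementation (no dendritic segments, permanences or thresholds), just an illustration of how cells that receive lateral input from the previously active cells become predictive, so a repeated sequence stops "bursting":

import random

CELLS_PER_COLUMN = 4   # cells sharing the same feed-forward input (one "column")

# lateral connections learned so far: source cell -> cells it puts into the
# predictive state for the next time step
connections = {}

def step(active_columns, prev_active_cells):
    """One toy Temporal Memory time step."""
    # cells predicted by the cells that were active at the previous step
    predictive = {target
                  for src in prev_active_cells
                  for target in connections.get(src, set())}
    active_cells = set()
    for col in active_columns:
        cells = [(col, i) for i in range(CELLS_PER_COLUMN)]
        predicted = [c for c in cells if c in predictive]
        if predicted:
            # the input was expected: only the predicted cells fire
            active_cells.update(predicted)
        else:
            # unexpected input: the whole column "bursts" ...
            active_cells.update(cells)
            # ... and one cell learns to be predicted by the previous context
            learner = random.choice(cells)
            for src in prev_active_cells:
                connections.setdefault(src, set()).add(learner)
    return active_cells

# Feed the sequence A -> B (columns 0, then 1) twice: the first pass bursts,
# but the second time B follows A only the single predicted cell fires.
prev = set()
for symbol in [0, 1, 0, 1]:
    active = step({symbol}, prev)
    print("column", symbol, "-> active cells:", len(active))
    prev = active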

hyunsunggo

Thank you, Siraj. Great explanation. HTM is our university project and our professor spent around 7 hours covering this topic, but his way of explaining was nowhere near yours. I also tried to get the point from the HTM School videos, but just by watching this short video from you I finally get what HTM is about. Thanks again, you're a great man :)

daianaa_ir

Just the sort of thing I was about to go looking for. Perfect timing.

cheerfultrout

Thank you for the Numenta video, I have been waiting years for this.

larryteslaspacexboringlawr

Grid cells (the concept behind Numenta's new framework) - are they being implemented in HTM?

Stan_

Siraj! You inspire us! This is some quaLITy content! Keep up the awesome work, bruh!

sagardilwali

Great idea, mixing different concepts together and adding new ones. I think deep learning has been making great advances since its beginning and it still surprises me!
Good to see this video after all the blockchain boom we have these days.

javisartdesign

Another great talk from Siraj. I am going to watch more of Siraj's videos.

Stan_

Nice vid! By the way, recently I started paying close attention to spiking nets and neuromorphic architecture. Love this channel.

artemkovera

Funny, I seem to remember you dropping Numenta's name not so long ago as a little bit of a dig against companies that make big claims without any results, but it seems you have changed your mind a little on this one?

TheApeMachine

I finished Andrew Ng's Coursera course about a month ago. Highly recommend it. Andrew is a genius at making complex things really simple to understand.
There were also a number of things I've not come across before.

tonycatman

HTM and SDR are very interesting technologies. It looks like they are able to capture a couple of ideas that live at a very deep level in our minds. Something that is really profound.

Unfortunately, so far Numenta has not been able to hit the market with a product that clearly demonstrates the possibilities offered by these technologies. We do not have any Numenta-based competitor to Google Translate, Google Image Search or Soundhound yet.

I hope that more people will get involved in Numenta's effort in the future and that those people will create some impressive product.

Meanwhile, many, many thanks to Siraj for having covered this interesting technology in such a clear and engaging way.

AlessandroBottoni

Hi Siraj, great video! I would also like to point out that back-propagation itself is a variant of Hebbian learning! The idea of "neurons that fire together, wire together" can be applied to labels and inputs as well (where backprop tries to tie inputs to labels). If you try to derive the update rule for one layer of a neural network, you will realize that it's Hebbian (with some normalization factors not so different from Oja's rule and BCM). With more layers, what backprop computes is a chain of layer-wise gradients multiplied together (a cascade of correlations), which is like a generalized version of Hebbian learning in a sense.

In fact, Hebbian learning is the basic building block of modern learning algorithms. Modern learning algorithms are about memorizing patterns of the input in the weights, and what the Hebbian rule does is move the weights toward the patterns of the inputs that are relevant (when the outputs are active). That's the intuition behind machine learning!
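
To make the single-layer point above concrete, here is a small numpy sketch (the names W, x, t and the learning rate are my own notation, not from the comment above). For one linear layer trained with squared error, the gradient update is an outer product of the error with the input, which has the same "post-synaptic signal times pre-synaptic input" form as a plain Hebbian update:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # pre-synaptic input
t = rng.normal(size=3)        # target ("label")
W = rng.normal(size=(3, 4))   # weights of a single linear layer
lr = 0.1                      # learning rate

y = W @ x                     # post-synaptic output

# plain Hebbian update: post-synaptic activity times pre-synaptic activity
dW_hebbian = lr * np.outer(y, x)

# backprop / delta rule on the squared error 0.5 * ||t - y||**2 for this layer:
# dE/dW = -(t - y) x^T, so gradient descent gives the update below, which is
# again an outer product, with the error (t - y) as the post-synaptic signal
dW_backprop = lr * np.outer(t - y, x)

print(dW_hebbian.shape, dW_backprop.shape)   # both (3, 4)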

pvirie

You recommended a book around 7:18 but it was difficult to catch the name (or author)... Can you give the name and/or maybe make a video on "great books" ?

erol

Why, in the Walkthrough.ipynb file, are the values 3 and 4 the same when they are encoded?
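
A likely explanation, sketched with a toy encoder (this is not nupic's ScalarEncoder, and the parameters below are made up rather than the ones in Walkthrough.ipynb): a scalar encoder maps a value into a bucket and turns on a contiguous block of bits for that bucket, so when the bucket width (resolution) is larger than the gap between two values, both values get identical encodings.

def encode(value, minval=0.0, maxval=100.0, n_bits=20, w=5):
    """Toy scalar encoder: turn on a block of w bits positioned by the value."""
    n_buckets = n_bits - w + 1
    resolution = (maxval - minval) / n_buckets   # width of one bucket (6.25 here)
    bucket = int((value - minval) / resolution)
    bucket = min(max(bucket, 0), n_buckets - 1)  # clip to the valid bucket range
    bits = [0] * n_bits
    for i in range(bucket, bucket + w):
        bits[i] = 1
    return bits

# With this coarse resolution, 3 and 4 fall into the same bucket,
# so their encodings come out identical:
print(encode(3))
print(encode(4))
print(encode(3) == encode(4))   # True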

bydlobydlo

How does it differ from recurrent neural nets, and is there potential for a performance improvement with Numenta's HTM?

deconvfft