The First Neural Networks

Comments

I’ve been in ML since 2013 and have to say: wow… you and your team really do deserve praise for the solid research and delivery. I’ll bookmark this video to point people to. Thank you

dinoscheidt

Please continue the story. A cliffhanger like that deserves a sequel!
Seriously, this was a truly impressive video and I learned new things from it.

strayling

One of the top education channels on YouTube for sure

fibersden

Got this recommended to me after getting my first digit recognition program working. The neural networks know I’m learning about neural networks

soanywaysillstartedblastin

Your videos are always well worth the time to watch, thanks!

PeteC

The one name missing from this, from my high-school memory, is Norbert Wiener, author of "Cybernetics". I do remember a circa 1980 effort of mine to understand the implications of rule-based AI for my area of training (medicine). The Mycin program (infectious disease diagnosis and management), sited at Stanford, could have been the seed crystal for a very useful application of the symbol-based methods. It wasn't maintained and expanded after its initial development. It took too long to do data input and didn't handle edge cases or apply common sense. It was, however, very good at difficult "university level specialist" problems. I interviewed Dr Shortliffe and his assessment was that AI wouldn't influence the practice of medicine for 20-30 years. I was hugely disappointed. At the age of 30 I thought it should be just around the corner. So here it is 45 years later and symbolic methods have languished. I think there needs to be one or more "symbolic layers" in the development process of neural networks. For one thing, it would allow insertion of corrections and offer the possibility of analyzing the "reasoning".

dwinsemius

I love seeing Minsky come up, as I have a (tenuous) connection to him: he is my academic "great great grand advisor." That is, my PhD advisor's PhD advisor's PhD advisor's PhD advisor was Minsky. Unfortunately, stories about him never got passed down; I only have a bunch of stories about my own advisor and his advisor, so it is interesting seeing what he was up to.

MFMegaZeroX

"A human being without life" hurts too much.

hififlipper

Interesting that Claude Shannon's observations on the meaning of information being reducible to binary came about at virtually the same time as the early neural networks papers.

Edit - "A Mathematical Theory of Communication" by Shannon was published in 1948. Also, Herb Simon was an incredible mind.

amerigo

Gosh, I remember studying physiology in the late 60s when human nervous system understanding was still in the relative dark ages - for instance plasticity was still unknown, and they taught us that your nerves stopped growing at a young age and that was it.
But I had no idea how far they'd come with machine learning in the Perceptron - already using tuneable weighted responses simulating neurons? Wow!
If they could have licked that multilayer problem it would have sped things up quite a bit.
You mentioned the old chopped-up planaria trick - are you familiar with the work of Dr Michael Levin? His team is carrying the understanding of morphogenesis to new heights - amazing stuff! Thank you kindly for your videos! Cheers.

stevengill
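
The "tuneable weighted responses" and the "multilayer problem" in the comment above are simple enough to sketch. Here is a minimal, hypothetical illustration (not code from the video) of Rosenblatt's perceptron learning rule: nudge each weight after every mistake. A single layer of weights learns AND, but no setting of those weights can represent XOR, which is exactly what made multiple layers necessary.

```python
# Minimal sketch of a single-layer perceptron trained with
# Rosenblatt's learning rule (hypothetical illustration): weights
# are nudged toward the target after every misclassified sample.

def train_perceptron(samples, epochs=25, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with 0/1 targets."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w1, w2, b = train_perceptron(AND_DATA)
print([predict(w1, w2, b, *x) for x, _ in AND_DATA])  # [0, 0, 0, 1]: AND is learned

w1, w2, b = train_perceptron(XOR_DATA)
# XOR is not linearly separable, so the rule never converges and the
# predictions can never equal [0, 1, 1, 0], no matter how many epochs.
print([predict(w1, w2, b, *x) for x, _ in XOR_DATA])
```

Adding one hidden layer of such units makes XOR representable, which is the "multilayer problem" the video's story hinges on: the representation was known, but a training rule for the hidden weights was not.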

Thanks Jon. As someone who tinkered with neural nets in the 1980s and 90s, this history connects the dots and illuminates the genesis of the theories & tools we were working with... J

JohnHLundin

This is the best documentary on this topic I have ever seen. It's so well researched, it's like doing the whole Wikipedia dive.

jakobpcoder

5:14 Look at this guy, throwing out Principia Mathematica without even name-dropping its author. 😂

tracyrreed

You always bring up interesting topics. Keep it up, you're doing a great job 👍.

francescotron

I love listening to this with our current modern context

HaHaBIah

This was great. A more in-depth one would be awesome: the fall and rise of the perceptron, going from single to multiple layers.

helloworldcsofficial

Thanks for bringing back memories of the class I took from Minsky and Papert (pronounced with a short a, not a long a) in 1969, just when the book had come out. You filled in some of the backstory that I wasn't aware of.

BobFrTube

As always, I love your videos, the depth of knowledge, and the people who comment, as they all have interesting stories about what is in your videos. One of the descendants of the symbolic movement was cognitive architectures like SOAR and ACT-R, from Newell's theories of cognition. Symbolic systems are not gone, and they perform many tasks that neural networks don't do well. However, neural networks do one thing much better than cognitive systems: getting all the data and knowledge of the world into the network, and being able to extract it out. There is no way you can program all of that as rules in symbolic systems. There will be a merger of both kinds of system so they can perform better reasoning and cognitive tasks in the next iteration of all of this. We are really just at the beginning, standing on the shoulders of giants.

NanoAGI

The Einstein, Oppenheimer, Bohr, Feynman, Schrödinger and Heisenbergs of AI. The McCulloch-Pitts neuron network and Rosenblatt's training paradigm took 70 years to get to "here" and should be acknowledged. I remember as a little kid in the 70s reading articles on the different people leading the symbolic movement, and thinking "none of them really seem to know or have conviction in what they're campaigning for".

TheChipMcDonald
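
The McCulloch-Pitts neuron the comment above refers to is also easy to sketch. A hypothetical illustration (not from the video): the 1943 unit takes binary inputs with fixed, hand-chosen weights and fires when their weighted sum reaches a hard threshold, and that alone suffices for basic logic.

```python
# Sketch of a McCulloch-Pitts threshold unit (hypothetical
# illustration): outputs 1 exactly when the weighted sum of its
# binary inputs reaches the threshold. There is no learning;
# the weights and threshold are chosen by hand per gate.

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def and_gate(a, b):
    return mp_neuron((a, b), (1, 1), threshold=2)

def or_gate(a, b):
    return mp_neuron((a, b), (1, 1), threshold=1)

def not_gate(a):
    return mp_neuron((a,), (-1,), threshold=0)

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([and_gate(a, b) for a, b in pairs])  # [0, 0, 0, 1]
print([or_gate(a, b) for a, b in pairs])   # [0, 1, 1, 1]
print([not_gate(a) for a in (0, 1)])       # [1, 0]
```

Networks built from such units can compute any Boolean function, which is the result McCulloch and Pitts argued in 1943 and the starting point for everything Rosenblatt added trainable weights to.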

Recurrent Neural Networks are about to make a HUGE comeback.

Wobbotherd