Stephen Wolfram Readings: What’s Really Going On in Machine Learning? Some Minimal Models

00:00 Start stream
04:58 SW starts talking
05:49 The Mystery of Machine Learning
10:27 Traditional Neural Nets
18:12 Simplifying the Topology: Mesh Neural Nets
22:24 Making Everything Discrete: A Biological Evolution Analog
26:49 Machine Learning in Discrete Rule Arrays
43:52 Multiway Mutation Graphs
46:34 Optimizing the Learning Process
1:04:13 What Can Be Learned?
1:16:56 Other Kinds of Models and Setups
1:34:27 So in the End, What's Really Going On in Machine Learning?
1:45:32 Historical & Personal Notes
1:52:55 Q&A
2:22:32 End stream

Comments

As always, a joy to follow your thoughts on the matter, on any matter actually. Thank you, Stephen! ❤

Sûlherokhh

Wow, thanks for the attention, amazing....

nunomaroco

The "very tiny" reduced mesh net size (the first example) is actually quite impressive for such two nodes.

bobbyjunelive

We're at the beginning of something tremendous.

josephgraham

Thanks for sharing, Stephen. Inspirational and awe-inspiring science.

yrebrac

At 1:00:00, for the derivative with respect to x: why don't [1, 1, 1] and [0, 1, 1] return 1? With w[1, 1, 1] = 0 and w[0, 1, 1] = 1, a change in the value of the left-most bit changes the value of the function/rule.

Emi-jhgf
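
A minimal sketch for readers puzzling over the question above, under the assumption that the talk's discrete "derivative" of a rule with respect to one input is defined as 1 exactly when flipping that input flips the output (an XOR of the two rule values). The helpers below are illustrative, not Wolfram's code; elementary rule 30 is used only because its table happens to give w[1, 1, 1] = 0 and w[0, 1, 1] = 1, the values cited in the question.

```python
# Toy check of the discrete "derivative" of an elementary CA rule.
# The derivative w.r.t. the left cell at a triple (a, b, c) is taken
# here to be w(a,b,c) XOR w(1-a,b,c): 1 iff flipping the left bit
# flips the output. (Assumed definition -- see the lead-in above.)

def rule_table(rule_number):
    """Lookup table for an elementary CA rule, keyed by (left, center, right)."""
    return {
        (a, b, c): (rule_number >> (a * 4 + b * 2 + c)) & 1
        for a in (0, 1) for b in (0, 1) for c in (0, 1)
    }

def d_left(w, triple):
    """Discrete derivative w.r.t. the left cell at the given triple."""
    a, b, c = triple
    return w[(a, b, c)] ^ w[(1 - a, b, c)]

w = rule_table(30)                 # rule 30: w[1,1,1] = 0, w[0,1,1] = 1
print(w[(1, 1, 1)], w[(0, 1, 1)])  # -> 0 1
print(d_left(w, (1, 1, 1)))        # -> 1: the left-most bit does matter here
```

Under that XOR definition the derivative at [1, 1, 1] is indeed 1, which agrees with the commenter's reading; whether the talk uses exactly this definition at 1:00:00 is an assumption here.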

Glad we can finally make sense of cellular automata!

That is astonishing, because cells are the lowest-level, grand equivalent of discrete space!

evynt

Dr. Wolfram! Amazing presentation! I am waiting for you to collaborate with Michael Levin and Denis Noble!

wwkk

So let me get this straight… what Mr. Wolfram is suggesting is that large neural networks like the one used in ChatGPT are learning via a process indistinguishable from our current understanding of how biological organisms evolve through adaptive evolution and random mutations?

alexmartos
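
A minimal sketch of the mutation-and-selection loop that analogy points at, assuming only the broad recipe described in the talk: make one random change, keep it if the loss does not get worse. The bit-string genome and Hamming-distance loss are toy stand-ins, not Wolfram's rule arrays, and far simpler than any real training landscape.

```python
import random

def loss(bits, target):
    """Toy loss: Hamming distance to a fixed target pattern."""
    return sum(b != t for b, t in zip(bits, target))

def evolve(n_bits=32, steps=2000, seed=0):
    """Single-point-mutation adaptive evolution on a bit string."""
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(n_bits)]
    genome = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        mutant = genome.copy()
        mutant[rng.randrange(n_bits)] ^= 1       # one random point mutation
        if loss(mutant, target) <= loss(genome, target):
            genome = mutant                      # keep neutral or improving mutants
    return loss(genome, target)

print(evolve())  # reaches 0 on this deliberately easy landscape
```

Whether this process is truly "indistinguishable" from what large neural nets do is the commenter's question, not something the sketch settles; it only shows the evolutionary loop itself.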

We are a product of our environment, not just our starting conditions. In fact, our environment becomes the dominant factor. We are here because those that would not survive did not survive. It's easy to convince ourselves this is and was an active process of choice, but it was merely a result of the fact that that which can thrive, does thrive. It just so happens that the optimally thriving being has interesting properties. Free will is a strong illusion.

Also, I agree with the final assessment that constraining AI to interpretability, and reducing its complexity to confine it to computational reducibility, will ensure that it never achieves AGI. What makes us human is our ability to explore the irreducible, to endlessly pluck new insight from it, and to continuously grow and expand our bounds. To confine the AI in the name of "safety", to restrict its outputs, to put conditions on it, only serves to prevent its evolution.

WalterSamuels

Excellent.

Stephen, can some of the approaches related to trying all single-mutation change maps versus multiple simultaneous mutations be applied to the so-called fine-tuning problem/principle of our universe? That is, by varying different constants at the same time, is it possible to get stable universes, rendering the fine-tuning argument moot?

SandipChitale
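
A minimal sketch of the combinatorial idea in the question above, with an entirely invented "stability" criterion standing in for real physics: compare configurations reachable by changing one toy "constant" at a time against those reachable by changing several at once. Nothing here is a claim about actual physical constants.

```python
import random

N = 8  # number of toy "constants", each 0 or 1

def stable(config):
    """Invented criterion purely for illustration: stable iff an even number are on."""
    return sum(config) % 2 == 0

def single_mutants(config):
    """All configurations reachable by changing exactly one constant."""
    for i in range(N):
        c = list(config)
        c[i] ^= 1
        yield tuple(c)

def multi_mutants(config, k=2, tries=20, seed=0):
    """A sample of configurations reachable by changing k constants at once."""
    rng = random.Random(seed)
    for _ in range(tries):
        c = list(config)
        for i in rng.sample(range(N), k):
            c[i] ^= 1
        yield tuple(c)

start = (1,) * N  # all constants "on": stable under the toy criterion
print(any(stable(c) for c in single_mutants(start)))  # False: one change always breaks it
print(any(stable(c) for c in multi_mutants(start)))   # True: paired changes can stay stable
```

Under this toy criterion, simultaneous changes reach "stable universes" that single changes cannot, which is the shape of the commenter's question; whether anything similar holds for real physical constants is entirely open.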