THE COMPUTATIONAL UNIVERSE: MODELLING COMPLEXITY - Stephen Wolfram PhD #52

Does the use of computer models in physics change the way we see the universe? How far-reaching are the implications of computational irreducibility? Are observer limitations key to the way we conceive the laws of physics?
In this episode we get into the difficult yet beautiful topic of trying to model complex systems like nature and the universe computationally, and how, beyond a low level of complexity, all systems seem to become equally unpredictable. We have a whole episode in this series on Complexity Theory in biology and nature, but today we’re going to be taking a more physics and computational slant.
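To make that idea concrete, here is a minimal Python sketch (my illustration, not code from the episode) of rule 30, the elementary cellular automaton Wolfram often points to: the rule is trivially simple, yet the pattern it grows is computationally irreducible, so in general the only way to know row n is to compute all n rows.

def rule30_row(cells):
    # Rule 30: new cell = left XOR (centre OR right); boundary cells are 0.
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

row = [0] * 31
row[15] = 1                      # start from a single black cell
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = rule30_row(row)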
Another key element of this episode is Observer Theory: if we want to understand the laws of physics we’ve worked out from our environment, we have to take into account the perceptual limitations of our species’ context and perspective. Those laws are not and cannot be fixed and universal; they will always be perspective-bound, within a multitude of alternative branches of possible reality running alternative possible computational rules. We’ll then connect this multicomputational approach to a reinterpretation of entropy and the second law of thermodynamics.
The fact that my guest has been building on these ideas for over 40 years, creating computer-language and AI solutions to map his deep theories of computational physics, makes him the ideal guest to help us unpack this topic. He is physicist, computer scientist and tech entrepreneur Stephen Wolfram. In 1987 he left academia at Caltech and Princeton behind and devoted himself to his computer science intuitions at his company Wolfram Research. He’s published many blog articles about his ideas and written many influential books, including “A New Kind of Science”, more recently “A Project to Find the Fundamental Theory of Physics” and “Computer Modelling and Simulation of Dynamic Systems”, and, just out in 2023, “The Second Law”, about the mystery of entropy.
One of the most wonderful things about Stephen Wolfram is that, despite his visionary insight into reality, he really loves to be ‘in the moment’ with his thinking: engaging in Socratic dialogue, staying open to perspectives other than his own, and allowing his old ideas to be updated if something comes up that contradicts them. Given how quickly the fields of physics and computer science are evolving, I think his humility and conceptual flexibility give us a fine example of how we should update how we do science as we go.

What we discuss: 
00:00 Intro
07:45 The history of scientific models of reality: structural, mathematical and computational.
14:40 Late 2010s: a shift to computational models of systems.
20:20 The Principle of Computational Equivalence (PCE)
24:45 Computational Irreducibility - the property that means you can’t predict the outcome in advance.
27:50 The importance of the passage of time to Consciousness.
28:45 Irreducibility and the limits of science.
33:30 Gödel’s Incompleteness Theorem meets Computational Irreducibility.
42:20 Observer Theory and the Wolfram Physics Project.
45:30 Modelling the relations between discrete units of space: hypergraphs (see the toy sketch after this list).
47:30 The progress of time is the computational process that is updating the network of relations.
50:30 We ’make’ space.
51:30 Branchial Space - different quantum histories of the world, branching and merging
54:30 We perceive space and matter to be continuous because we’re very big compared to the discrete elements.
56:30 Branchial Space vs. the Many Worlds interpretation.
58:50 Rulial Space: All possible rules of all possible interconnected branches.
01:07:30 Wolfram Language bridges how humans think about their perspective with what is computationally possible.
01:11:00 Computational Intelligence is everywhere in the universe. e.g. the weather.
01:19:30 The Measurement problem of QM meets computational irreducibility and observer theory. 
01:20:30 Entanglement explained - common ancestors in branchial space.
01:32:40 Inviting Stephen back for a separate episode on AI safety, safety solutions and applications for science, as we didn’t have time.
01:37:30 At the molecular level the laws of physics are reversible.
01:40:30 What looks random to us in entropy is actually full of data.
01:45:30 Entropy defined in computational terms.
01:50:30 If we ever overcame our finite minds, there would be no coherent concept of existence.
01:51:30 Parallels between modern physics and ancient eastern mysticism and cosmology.
01:55:30 Reductionism in an irreducible world: saying a lot from very little input.
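As flagged at 45:30 above, here is a hedged Python sketch of the hypergraph idea (the rewrite rule is a toy one chosen for illustration, not claimed to be a rule from the Wolfram Physics Project): space is a collection of relations between abstract nodes, and the progress of time is the repeated rewriting of that network.

def update(edges, fresh):
    # Toy rule: each relation (x, y) spawns a new relation (y, z)
    # to a fresh node z, so the network of relations keeps growing.
    new_edges = []
    for x, y in edges:
        new_edges.append((x, y))
        new_edges.append((y, fresh))
        fresh += 1
    return new_edges, fresh

edges, fresh = [(0, 1)], 2       # start from a single relation
for step in range(1, 5):
    edges, fresh = update(edges, fresh)
    print(f"step {step}: {len(edges)} relations, {fresh} nodes")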

References:
“The Second Law: Resolving the Mystery of the Second Law of Thermodynamics”, Stephen Wolfram
“A New Kind of Science”, Stephen Wolfram
“Observer Theory” (blog article), Stephen Wolfram
Comments:

One of the most important thinkers of his generation

colinadevivero

Pockets of reducibility through coarse graining are a result of patterns being revealed through scale-invariant transformations.

Like looking at a bunch of random color noise that sums to white (or black) when blurred or zoomed out (viewed from far away): the aggregate information creates a homogeneous or other emergent pattern.

When looked at in the context of NKS, homogeneity is just one of the 4 classes of behavior. Hence there’s an infinite number of these pockets: every scale produces any of the 4 behaviors (and combinations thereof), and it goes on forever at all scales.

So in general there will always be these pockets of reducibility where things can be described approximately by equations, in the same way that a bunch of random colors can be described in aggregate as “white” at a coarse-grained scale.
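A toy numerical version of that point (my sketch, assuming averaging stands in for blurring): coarse-grain random noise over bigger and bigger blocks and the description collapses toward a single homogeneous value.

import random

random.seed(0)
noise = [random.random() for _ in range(4096)]   # random pixel intensities

for block in (1, 16, 256, 4096):
    # Coarse-grain: replace each block of pixels with its average.
    cells = [sum(noise[i:i + block]) / block
             for i in range(0, len(noise), block)]
    print(f"block={block:4d}  cells={len(cells):4d}  "
          f"spread={max(cells) - min(cells):.3f}")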

The invariance going on here (the fact we experience both reducibility and irreducibility) is because the two are dual under an equivalence relation (computational equivalence).

That’s why his work is such a big deal, and why nobody has really thought it through: science has ONLY ever cared about equations (the reducible) and has tended to discard the irreducible, when in this framework they are unified, the same thing. Wolfram’s work is a modern-day equivalent of E=mc^2.

NightmareCourtPictures

I think the disputes about reductionism and materialism are red herrings. IMO the underlying dispute is whether we can in principle understand things, or whether there are things that cannot be understood in principle. In other words: are there supernatural things that even in principle we will never understand? The scientific outlook generally says no, even if ultimate reality is not fully understandable because one can always ask the question why. And definitely no for relatively simple things like consciousness, self and life.

SandipChitale

I know it’s half-baked, but this all sounds very similar to Terence McKenna talking about hyperspace during the psychedelic experience, and the universe as a novelty-complexity engine that updates itself into higher and higher levels of complexity and runs on “language”, or what Stephen is calling programs. Then the Ruliad would be what McKenna called, in a much more Messianic way, the transcendental object at the end of time.

skihik

It may be that even though the multiple quantum history branches are being evaluated at the same time, there are only specific waypoints in that evolution at which things align in such a way that perception can occur; and one of the characteristics of these waypoints is that the multiple branches have merged to let us perceive only one outcome of the measurement. That is why we (or our measuring devices), who are embedded in this branchial space, never perceive (or measure) a simultaneously dead and alive cat. So in some sense a measurement can be thought of as a branchial network update that forces and hastens the arrival of such waypoints. Think of it like this: a key can enter the keyhole of a lock only when the internal tabs are pushed out in a specific way to let the key through. Or another example: when travelling on a city road, only when we arrive at a crossing can we see the buildings on the side streets.

The very phenomenon we call perception can only happen at the waypoints described in the previous paragraph. And in fact there could be other network updates which force these waypoints and force the merging. That is why quantum computers are shielded from stray air molecules, lest they destroy the quantum state. BTW, the last sentence also debunks the myth that consciousness is necessary to collapse the quantum wave function.
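A hedged Python sketch of that branching-and-merging picture (the string rewrite rules here are invented for illustration, not Wolfram’s actual model): applying every possible rewrite makes states branch, and identical strings reached along different paths merge, like the waypoints above.

RULES = [("A", "AB"), ("B", "A")]   # hypothetical rewrite rules

def successors(state):
    # All strings reachable by applying one rule at one position.
    out = set()
    for lhs, rhs in RULES:
        i = state.find(lhs)
        while i != -1:
            out.add(state[:i] + rhs + state[i + len(lhs):])
            i = state.find(lhs, i + 1)
    return out

frontier = {"AB"}
for step in range(1, 5):
    # Using a set merges states reached along different branches.
    frontier = set().union(*(successors(s) for s in frontier))
    print(f"step {step}: {sorted(frontier)}")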

SandipChitale

Think of branchial space as the first derivative of rulial space, and our regular space as a first derivative of branchial space, or a second derivative of rulial space.

SandipChitale

At least there are pockets of reducibility... so that is good.

danellwein

Computationally bounded == not Laplace’s demon. The secret of apparent free will is also rooted in our computational boundedness as observers. Thus true libertarian free will does not exist; our free will only appears libertarian to us because we are computationally bounded observers coarse-graining what is going on.

SandipChitale

Oh no, it's Wolfram - I need a really strong coffee before sitting through this one.
I am still trying to determine what proportion of Wolfram Nonsense is actually true and useful.

PetraKann