Why Math isn't Everything: Kurt Gödel and the Incompleteness Theorems

Kurt Gödel, at the age of 25, demonstrated a hole in the very foundations of mathematics, one with broad philosophical implications. In short, our understanding of the universe is NECESSARILY incomplete, because any formal system expressive enough to capture arithmetic is either inconsistent or incomplete, a limit exposed by certain self-referential statements.
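For reference, here is a standard modern statement of the result (the Gödel–Rosser form found in textbooks; the notation and exact hypotheses below are the usual ones, not the video's own wording):

```latex
% First incompleteness theorem, Gödel–Rosser form.
% Notation: T \vdash \varphi means "T proves \varphi".
\[
\text{If } T \text{ is consistent, effectively axiomatized, and can express basic arithmetic,}
\]
\[
\text{then there is a sentence } G_T \text{ such that } T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T .
\]
```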
Comments

That’s actually one of the best explanations I’ve heard. Well done

fluxpistol

I don't think Gödel broke math; I think he elevated it. He proved that math is so infinitely complex that it cannot be wrapped up in a simple list of axioms. Only an infinitely complex subject can be used to describe an infinitely complex existence. It's still self-evident that everything can be described mathematically, and it only becomes more self-evident the deeper we go.

kchannel

Now THIS is one of the best, and most succinct, explanations of Gödel's Incompleteness Theorem I've come across.

ryangarritty

Jacques Barzun wrote that 'math is a language much like Latin or Greek was to the humanists of the Renaissance' (not an exact quote). In the future, Barzun said, mathematicians will rule the world, able to deal with the complexities that the rest of us eschew, longing instead for more primitivism. Quality and quantity can be paradoxical, but that's a Pandora's box, or so I think.

rockyfjord

Gödel, my hero! Now I don't need to learn math! 😱 sudden realization that we need to brace for whatever supersedes math...

tudogeo

*Warning: massive comment with significant hypothesizing and lots of jargon*

I'm a huge logic nerd, mostly because I enjoy math foundations and inventing programming languages, so Gödel's incompleteness theorem is one of my favorite topics to discuss and ponder when I have the chance.

I have a growing suspicion that the theorem, whilst correct in its full context, is not all there is to be said on the topic of completeness, termination, and consistency. In particular, the first-order logic Gödel was working with is intentionally over-powered in order to be useful, just as programming languages allow programmers to make mistakes, because ruling those mistakes out entirely would make it impossible to write other useful programs. The classes of mistakes are different, but they're still there.

Classical predicate logic (which is what you need to prove things about arithmetic) actually allows near-unrestricted recursive definitions, which Gödel uses to great effect. It's just as easy to define "This statement is false" and work with it (it doesn't cause as many issues, but it's still weird) as the statement "This statement is unprovable". These unrestricted recursions are fraught with problems, though. The undecidability of evaluating untyped lambda calculus terms, or of the halting problem, is exactly the same type of recursion-caused issue.
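A minimal Python sketch of the diagonal argument behind the halting problem mentioned above; the decider `halts(f, x)` is hypothetical (no such total function can exist) and every name here is illustrative rather than taken from the comment:

```python
# Sketch of the halting-problem diagonal argument.
# Suppose, for contradiction, a total decider `halts(f, x)` existed that
# returns True exactly when f(x) eventually terminates.
def halts(f, x):
    raise NotImplementedError("no such decider can exist; placeholder only")

def diagonal(f):
    # Self-referential twist: do the opposite of what `halts` predicts
    # about running f on itself.
    if halts(f, f):
        while True:          # loop forever if f(f) is predicted to halt
            pass
    else:
        return "halted"      # halt if f(f) is predicted to loop

# Feeding `diagonal` to itself forces a contradiction:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
# if it were False, diagonal(diagonal) would halt.  Either way `halts` is wrong,
# the same self-reference pattern as "this statement is unprovable".
```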

There are also some weird asymmetries in classical logic: some things come in pairs (and/or, universal/existential quantification), while others stand alone (implies and well-founded recursion). One member of each pair is nicely defined by how to construct its proofs (or / exists), and the partner just feels clunky (and / all). This is way beyond the scope of an already massive YouTube comment, but these pairings are represented most elegantly in category-based logics. It turns out that every logical connective comes as part of a pair, with the two missing ones being the "constructive" partner to implies, and the "destructive" partner to well-founded recursion.
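One possible way to tabulate the pairings being described, in standard notation; identifying the two "missing" partners with subtraction and coinduction below is my reading of the comment, not something it states explicitly:

```latex
% Connective pairings, roughly as described above.
\[
\begin{array}{lcl}
\land \ (\text{and}) & \text{paired with} & \lor \ (\text{or})\\
\forall \ (\text{all}) & \text{paired with} & \exists \ (\text{exists})\\
\to \ (\text{implies}) & \text{paired with} & \smallsetminus \ (\text{subtraction / converse nonimplication})\\
\mu X.\,F(X) \ (\text{inductive, well-founded}) & \text{paired with} & \nu X.\,F(X) \ (\text{coinductive, finitely observable})
\end{array}
\]
```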

The fun part is that the "constructive" partner to implies is roughly "is not necessarily implied by" or "converse nonimplication". It's proven by constructing a member of the right-hand side for which the left-hand side does not hold. This allows us to explicitly describe "not provable" as a connective in the logic itself (instead of having to jump through a representation like Gödel) by saying "P is not necessarily implied by True". There's only one "proof" of True, the trivial one, and so if there's at least one proof of True from which we can't get to P, then there's no way to get from True to P, which is exactly what unprovability means. This is different from the "not" defined by "implies False", as that asserts that any proof of P can be turned into a proof of False, which cannot happen by definition, so P must not have any proofs either. The latter is a stronger condition. (Turning P -> False into P </- True is actually an application of the Yoneda lemma, as a bit of trivia.)
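Spelled out in symbols (bi-intuitionistic notation supplied here, not the commenter's own), the two "negations" being contrasted are roughly:

```latex
% Negation via implication vs. the subtraction-based co-negation.
\[
\neg P \;:=\; P \to \bot
\qquad\qquad
{\sim}P \;:=\; \top \smallsetminus P
\]
% Read {\sim}P as "there is a proof of \top that does not yield a proof of P".
% Classically both collapse to the Boolean complement; constructively
% \neg P \vdash {\sim}P but not conversely, so \neg P is the stronger condition,
% matching the remark above that "the latter is a stronger condition".
```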

On top of all that wonderful stuff, splitting recursion into well-founded recursion (finite-sized objects) and its partner (possibly infinite but finitely splittable objects) actually dismisses Gödel's problem statement, since it is either ill-formed (recursing through unprovability doesn't work) or you run out of inductive hypotheses when trying to use the original Gödel numbering proof.

This is immensely interesting because it hints that by making the system /less/ expressive and more restrictive, we can actually sidestep the incompleteness theorem: the system doesn't have unrestricted recursion in it and so is less powerful than the first-order logic Gödel was working with. Yet it is still powerful enough to contain and describe any arithmetic, as well as any programming language or system of logic. It just won't be able to completely evaluate all of them, because of things like the incompleteness theorem and the halting problem.
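As a tiny concrete illustration of the "well-founded recursion only" restriction being proposed, here is a sketch (mine, not the comment's) contrasting unrestricted recursion with structural recursion over finite data in Python:

```python
# Unrestricted recursion: nothing prevents a definition that never returns.
def spin():
    return spin()            # legal to write, loops forever if called

# Structural (well-founded) recursion: each call works on a strict piece of a
# finite input, so termination is guaranteed by the shape of the data itself.
def depth(tree):
    """tree is either None (a leaf) or a (left, right) pair of subtrees."""
    if tree is None:
        return 0
    left, right = tree
    return 1 + max(depth(left), depth(right))   # recurses only on subterms

print(depth(((None, None), None)))   # -> 2
```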

timh.

Cool 😎. I appreciate the book suggestions

tiffanyclark-grove

Goldstein's book is very understandable and contains interesting detours through history.

morgengabe

Why do mathematicians always claim that Hilbert's geometry proves that you can have complete and consistent maths? How can geometry not be based on numbers?

En_theo

Isn’t all math based on axioms that aren’t proven, or can’t be? That should’ve been the tell from the beginning that something was up.

NickolaySheitanov

Why do religious apologists not invoke Gödel's theorem? It seems to be their best argument against any atheist belief. (I also guess it would mean that agnosticism is the only really right answer.)

jamescalderon

An entire civilisation went into understanding maths.
Kurt Gödel: okay, maths tortured me when I was a kid, now it's time to take revenge.

yecto

I know some people try to extrapolate this to LOGIC itself. That is when logic is self evident.

renzocoppola

It is a mistake to equate the universe or the objective world with reality.

rareword

Except now, when math is deemed racist.

spicemasterii

I keep hearing the story, but I haven't seen anyone actually show the story. Does the system become stagnant? Or does the system fail to explain its own operation? Or does the system fail to create the change it needs to progress?

anthonym

The best explanation I have seen, thank you.

paullavery

What Gödel proved was that you can't use the formal machinery of mathematics to certify its own consistency, not that math itself is inconsistent. An equation like E=mc^2 is consistent, just as any tautological statement is. So, in a sense, this just proves that math is more basic than logic. Gödel himself did not believe that math was inconsistent. He thought math existed platonically but that humans were not smart enough to fully understand how. Some think that if he had explored mathematics via tautology he might have come up with a different finding. So, it may be that you can't count the Pythagoreans out yet. :)
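The result being paraphrased is the second incompleteness theorem; in its standard textbook form (notation supplied here) it reads:

```latex
% Second incompleteness theorem.
% Con(T) is the arithmetized sentence asserting "T is consistent".
\[
\text{If } T \text{ is consistent, effectively axiomatized, and contains enough arithmetic, then } T \nvdash \mathrm{Con}(T).
\]
% That is, T cannot certify its own consistency; this does not say T is inconsistent.
```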

MentatMentor

Please consider this a follow-up to my recent posts (on Gödel’s incompleteness theorem) about the architecture of materiality and that of the realm of abstraction, the two structurally linked, which prohibits the formulation of conceptual contradictions. I present the following for critique.

After watching several video presentations of Gödel’s incompleteness theorems 1 and 2, in each presentation I have been able to find, it was made clear that he admired Quine’s liar’s paradox to a measure which inspired him to formulate a means of translating mathematical statements into a system reflective of the structure of formal semantics, essentially a language by which he could intentionally introduce self-reference (for some unfathomable reason). Given that it is claimed that this introduces paradoxical conditions into the foundations of mathematics, his theorems can only be considered suspect, a corruption of mathematics’ logical structure. The self-reference is born of a conceptual contradiction, which I have previously shown to be impossible within the bounds of material reality and the system of logic reflective of it. To demonstrate again, below is a previous critique of Quine’s liar’s paradox.

Quine’s liar’s paradox is in the form of the statement, “this statement is false”. Apparently, he was so impacted by this that he claimed it to be a crisis of thought. It is a crisis of nothing, but perhaps only of the diminishment of his reputation. “This statement is false” is a fraud for several reasons. The first is that the term “statement” as employed, which is the subject, a noun, is merely a placeholder, an empty vessel, a term without meaning, perhaps a definition of a set of which there are no members. It refers to no previous utterance, for were that the case, there would be no paradox. No information was conveyed which could be judged as true or false. It can be neither. The statement commands that it be considered as such: if true, it is false, but if false, it is true, but again, if true, it is false, etc. The object of the statement, its falsity, cannot at once be both true and false, which the consideration of the paradox demands, nor can it at once be the cause and the effect of the paradoxical function. This then breaks the law of logic, that of non-contradiction.

Neither the structure of materiality, the means of the “process of existence”, nor that of the realm of abstraction, which is its direct reflection, permits such corruption of language or thought. One cannot claim to formulate a position by appeal to truths while denying truth, i.e., by employing terms and concepts in a statement whose very expression denies them. It is like saying “I think I am not thinking” and expecting that it could ever be true. How is it that such piffle could be offered as a proof by such a man as Quine, purportedly of such genius? How could it then be embraced by another such as Gödel and employed in the foundational structure of his discipline, corrupting the assumptions and discoveries of the previous centuries? Something is very wrong. If I am wrong, I would appreciate being shown how and where.

All such paradoxes are easily shown to be sophistry, their resolutions obvious in most cases. What then are we left to conclude? To deliberately introduce self-reference into mathematics, to demonstrate by its inclusion that somehow reality will permit such conceptual contradictions, is a grave indictment of Gödel. Consider:

As mentioned above, so that he might introduce self-reference into mathematics, he generated a kind of formal semantics, as shown in most lectures and videos, which ultimately translated numbers and mathematical symbols into language, producing the statement, “this statement cannot be proved”, it being paradoxical in that in mathematics, all statements which are true have a proof and a false statement has none. Thus, if true, it cannot be proved, yet being true it should have a proof; but if false, it can be proved, yet being false it should have none; and so on, thus the paradox. If this language could be created by the method of Gödel numbers (no need to go into this here), it logically and by definition could be “reverse engineered” back to the mathematical formulae from which it was derived. Thus, if logic can be shown to have been defied in this means of introducing self-reference into mathematics via this “language”, then should not these original mathematical formulae retain the effect of the contradiction of this self-reference? It is claimed that this is not the case, for the structure of mathematics does not permit such, which was the impetus for its development and employment in the first place. I would venture then that the entire exercise has absolutely no purpose, no meaning and no effect. It is stated in all the lectures I have seen that these (original) mathematical formulae had to be translated into a semantic structure so that the self-reference could be introduced at all. If then it could not be expressed in mathematical terms alone, and if it is found when translated into semantic structures to be false, does that not make clear the deception? If Quine’s liar’s paradox can so easily be shown to be sophistry, how is Gödel’s scheme not equally so? If the conceptual contradiction created by Gödel’s statement “this statement has no proof” is so exposed, no less a defiance of logic than Quine’s liar’s paradox, then how can all that rests upon it not be considered suspect, i.e., completeness, consistency, decidability, etc.?
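For concreteness, the Gödel numbering referred to here is just an injective encoding of symbol strings as integers; a toy Python version follows, where the symbol table and prime-power scheme are illustrative and not Gödel's exact coding:

```python
# Toy Gödel numbering: encode a formula (a string of symbols) as a single
# integer, so statements *about* formulas become statements about numbers.
SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5, '(': 6, ')': 7, 'x': 8}

def primes():
    """Yield 2, 3, 5, 7, ... (trial division; fine for short formulas)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_number(formula):
    """Encode the i-th symbol as the i-th prime raised to its symbol code."""
    g, ps = 1, primes()
    for ch in formula:
        g *= next(ps) ** SYMBOLS[ch]
    return g

print(godel_number('0=0'))   # 2**1 * 3**5 * 5**1 = 2430
```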

I realize that I am no equal to Gödel, who himself was admired by Einstein, an intellect greater than that of anyone in the last couple of centuries. However, unless someone can refute my critique and show how Quine’s liar’s paradox and, by extension, Gödel’s are actually valid, it is only logical that the work which rests upon their acceptance be considered invalid.

jamestagge

Thank you so much!! This is the best explanation of the theorem!!

lorenzdantes