Can LLMs Clarify the Concept of God?

"God as law, awe and fortune...

LLM analysis reveals that the concept of God is well-defined in semantic space by this three-word combination. It provides a closer match than, for example, Lord, Yahweh and Elohim."
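The comparison being claimed is easy to sketch with off-the-shelf word vectors. Here is a minimal illustration, assuming pre-trained GloVe embeddings loaded through gensim as a stand-in (the paper's actual model and corpus aren't specified here), comparing "god" against the centroid of each three-word combination:

```python
# Minimal sketch of the claimed comparison, assuming GloVe vectors via gensim
# stand in for the paper's embedding model.
import numpy as np
import gensim.downloader as api

kv = api.load("glove-wiki-gigaword-100")  # 400k-word lowercase vocabulary

def centroid_similarity(target, words):
    """Cosine similarity between `target` and the mean vector of `words`."""
    present = [w for w in words if w in kv.key_to_index]  # skip out-of-vocab tokens
    if not present:
        raise ValueError(f"none of {words} found in the vocabulary")
    centroid = np.mean([kv[w] for w in present], axis=0)
    v = kv[target]
    return float(np.dot(v, centroid) / (np.linalg.norm(v) * np.linalg.norm(centroid)))

print("law/awe/fortune:   ", centroid_similarity("god", ["law", "awe", "fortune"]))
print("lord/yahweh/elohim:", centroid_similarity("god", ["lord", "yahweh", "elohim"]))
```

Whether the first triple really scores higher than the second is entirely a fact about the embedding model and its training corpus, which is the question the video raises.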

In this video, I walk through the claims of the paper to see whether they deliver on the promise of clarifying our concept of God.

Why would we prefer the LLM to other ways of learning about God, like through the study of theology, mysticism, and philosophy? Is the LLM uniquely insightful?

Enjoy!

Comments

99 names of God 🫶 loving your videos, hello from Kyrgyzstan!

tolgonainadyrbekkyzy

“Truck and… guns?”
Lol. Damn right, brother.

whatwilliswastalkingabout

How about ineffable, ineffable, and ineffable? Now that would be something to ponder. But instead we're left with garbage in, garbage out.

IMLF

God have mercy = Law, God bless you = abundance, God help me = will

Decocoa

@13:46 Yeah you nailed it. Just because certain words cluster together in the LLM's calcified (parametric) knowledge, which is a function of the totality of the text you feed it, doesn't mean it's a definition or even close. Adding or removing text will alter which words cluster/coalesce. This is paradigmatic of using LLMs and reading way too much into them. They are subordinate to our language, but they do not accurately represent what our knowledge conveys.
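A toy way to see that corpus dependence, assuming gensim's word2vec and two invented mini-corpora (nothing to do with the paper's data): training on each gives different neighbours for the same word.

```python
# Toy sketch (invented corpora, not the paper's data): the words that end up
# near "god" depend entirely on the text the model is trained on.
from gensim.models import Word2Vec

corpus_a = [
    "god gives law and order to the people".split(),
    "the law of god inspires awe and fortune".split(),
] * 200
corpus_b = [
    "god hears prayer and grants mercy and grace".split(),
    "in prayer we thank god for mercy and love".split(),
] * 200

for name, corpus in (("corpus_a", corpus_a), ("corpus_b", corpus_b)):
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=20, seed=1)
    # Nearest neighbours are noisy at this tiny scale, but they shift with the corpus.
    print(name, model.wv.most_similar("god", topn=3))
```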

Decocoa

@2:31 The concept of Prince isn't being 'understood'. The coordinates the model 'learns' for words, and where it 'places' them in this abstract coordinate space (which has far more than three dimensions), are mostly a function of the corpus of word sequences the model is trained on. Within its training data, prince can also appear in other contexts next to other words, like Prince (the musician), Prince of Persia, etc. So while boy + heir + king may come out close to prince, nothing is being learnt; it's merely being memorised and calcified into the model. Only if you fed the model training data in which princes were explicitly referred to as boys, as heirs, and in relation to the king would the coordinates get about as close as they can get. But you've essentially spotted how, under the hood, these things really represent the knowledge they're fed and how they respond to being queried. There isn't any reasoning being done. In fact there is no reasoning mechanism. Only interpolation is occurring.
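That boy + heir + king query is easy to try against off-the-shelf vectors. A minimal sketch, assuming pre-trained GloVe vectors loaded through gensim as a stand-in for whatever model the video discusses; whatever comes out on top is a fact about co-occurrence in the training corpus, not about reasoning.

```python
# Sketch of the boy + heir + king comparison, assuming GloVe vectors via gensim
# as a stand-in for the video's model.
import gensim.downloader as api

kv = api.load("glove-wiki-gigaword-100")  # 400k-word lowercase vocabulary

# Nearest neighbours of the combined query vector.
print(kv.most_similar(positive=["boy", "heir", "king"], topn=5))

# "prince" on its own sits among all of its corpus senses (royalty, the
# musician, titles of works), which is exactly the ambiguity noted above.
print(kv.most_similar("prince", topn=10))
```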

Decocoa

@11:36 These models are language models, after all, not knowledge models with all the reasoning maps one needs to deal with novelty and create new knowledge to adapt to said novelty. They model language, which one can exaggerate into meaning that they model a human mind. Hence the polemic used there at the end. Just my thoughts :)

Decocoa

In The Brothers Karamazov, Ivan's Grand Inquisitor claims humans can satisfy their faith in God when they are allowed to have miracles (fortune), mystery (awe), and authority (law). Seems Dostoevsky had the best linguistic understanding of what we think of when we talk about God and faith, no?

ThFallen

Peterson's formulation reveals his misstep:
You simply can't "define" the sacred in human language

balderbrok

Guenon's and Evola's critique of science (in "The Reign of Quantity..." and "Ride the Tiger") provides an answer to this issue.
An LLM can't be of much help in the search for God or Truth, because the analysis is biased by its quantitative approach and by a methodology that bases the results on statistics drawn from a corpus of texts. It may shed some light on a bunch of other things which are human, all too human.

SalvoColli

These are exactly the questions we should be asking.

Worth looking into how vector spaces behind LLMs nail "understanding" beyond the limit of language.

NessieAndrew

Imagine if Terry Davis was still around

urbrandnewstepdad

I just realized that the LLMs use the same reasoning behind the Torah codes.

RoyalistKev

The God Vector is a kickass name for a novel.

Smegead

I don't really get what Jordan Peterson is up to, but they say God meets you where you are, and I don't pretend to know where Jordan Peterson is at in his head space. In my experience I sometimes wonder how to even reach some people on this topic, because I'm reminded of the saying that you can't fill a full cup. A lot of people's conversion to God is first preceded by their cup being knocked off the table. It's written that God chastises those He loves, and so getting your cup knocked off the table is something we try to avoid but may be exactly what God wants for us before He can work with us.

matthewgaulke

There is no meaning from a machine without human observation.

As Mr. Peterson says, "God is the point that our sight meets our consciousness."

martenscs

Bias here in this context refers to the training of the LLM. Those vectors in space and their proximity to each other are built and tuned during training. Depending on what text you train on, you could get very different vectors.

__napcakes__

In machine learning, the word "bias" has a well-established meaning. Although perhaps, in the context of a not entirely scientific article, one of the everyday meanings of the word was meant, e.g. in the sense that both corpora are incomplete and therefore provide a different, also incomplete, ontology. Or even prejudice regarding race, gender and so on, which is a hot topic in language models, if you know what I mean.

As for the generally accepted academic meaning: the presumption is that there is some function whose product is the data, and we select an objective function that is as close as possible to this unknown. Since there is a limited amount of data, and the data can be noisy (inaccurate measurements), we can never be absolutely sure that we have found the ideal function, but we can check how well it predicts points that were not used in training (this is how the target function is selected).

Roughly speaking, we have several points, and we want an equation whose graph will pass through these points. We can choose a complex formula that passes **exactly** through all these points, but if we then look at other points that we saved for the test and did not use during training, it may turn out that our function does not pass through them at all, not even close. This is a case of low (or even no) bias and high variance (a bad choice). We can instead find a very simple formula that works equally poorly on both the training and test datasets (high bias, relatively low variance, also bad, low predictive ability).
If we're lucky, we can find a formula of average complexity that passes **approximately** through the points of the training dataset and **approximately** through the points of the test dataset (low bias, low variance), which is a good result.
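A small numerical illustration of that trade-off, fitting polynomials of different degrees to an invented noisy dataset (nothing from the article):

```python
# Bias/variance sketch on invented data: degree 1 underfits (high bias),
# degree 14 chases the training noise (high variance), degree 4 lands in between.
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(2 * np.pi * x)

x_train = rng.uniform(0, 1, 15)
y_train = true_f(x_train) + rng.normal(0, 0.2, x_train.size)
x_test = rng.uniform(0, 1, 200)
y_test = true_f(x_test) + rng.normal(0, 0.2, x_test.size)

for degree in (1, 4, 14):
    # np.polyfit may warn about conditioning at degree 14; that is part of the point.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```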


But I got the impression that the article is talking about the fact that the training was carried out on a biased corpus, which, however, leaves open the question of how the authors imagine a non-biased corpus...

NA-diyy

Although western civilisation appears to change, the inability to separate God and man remains:

- Greeks: Gods behaving like humans
- Christianity: God becoming a man
- Neo-liberalism: Humans becoming Gods (determining the law)

Whereas, Islam succeeds in the clear separation between God and man.

Can’t remember where I read this.

y.v.

LLMs are showing that the collective mind is doing more than just expressing itself with language; it's also trying to solve the puzzle of the human condition. It's hard to recognize this because we only use linear language, and the puzzle of the human condition is multidimensional.

danskiver