Can Large Language Models Understand Meaning?

Brown University computer scientist Ellie Pavlick is translating philosophical concepts such as “understanding” and “meaning” into concrete ideas that are testable on LLMs.

---------


Comments

Giving LLMs 20 billion parameters and telling them "alright, tweak these until you speak contextually like a human", and then having them actually achieve that while we go "hold up, wait" is pretty much the wildest development that could've happened in machine learning.
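For what it's worth, the "tweak this until it looks human" loop really is that bare at its core: predict the next token, measure how wrong you were, nudge the parameters, repeat. A minimal sketch, assuming PyTorch and using a toy embedding-plus-linear model as a stand-in for a real transformer:

```python
import torch
import torch.nn as nn

vocab_size, dim = 1000, 64

# Toy stand-in for a real LLM: embed tokens, predict the next token.
model = nn.Sequential(
    nn.Embedding(vocab_size, dim),
    nn.Linear(dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake "human text": random token ids standing in for a corpus.
tokens = torch.randint(0, vocab_size, (32, 128))

for step in range(100):
    inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict token t+1 from token t
    logits = model(inputs)                            # (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                                   # "tweak the parameters"
    optimizer.step()
```

Scale that same loop up to billions of parameters and a web-sized corpus and you get the "hold up, wait" moment.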

MarcoFlores-pxmh

I think it's fascinating what something like AI can teach us about ourselves. It really exposed a huge number of people to the topic. I never thought that we could mimic language and understanding so well "just" with statistics and lots of computing power.

windy

Not human level, but not trivial.

Well said

davidcahan

I like this channel and most of its content, but this video just didn't really say anything other than that NN neurons are different from actual human neurons and that we don't understand how either works.

danlacey

This video is about "understanding" and "meaning". I think you can make an argument that LLMs do encode generalized concepts from their corpus. However, if you think that LLMs are able to _reason_ at anywhere near human levels, you might have fallen for a misconception. I'd recommend Subbarao Kambhampati's lectures "Can LLMs Reason and Plan?" to help unpack that notion.

BrianPeiris

Maybe it's just me, but I heard an awful lot of words in this video that sounded "erudite" but didn't actually reveal anything at all.

brianquigley

"meaning is about what is in the actual world"--Look, I'm a programmer who has studied AI but also philosophy. I'm not saying Neuroscience is a bad place to look, but if you want to compare two things nobody really understands, you're not going to learn much. Philosophy of Language is just as worth while place to connect the links between these two fields because language is to some degree represented by text. If you want to do scientific investigation you shouldn't just assume what the nature of meaning is--this is literally an entire field. Read Wittgenstein, Heidegger, Derrida, Austin and many others.

michaelr.landon

What is experience of the "real" world except information? Sensory information on some spectrum, observed as "feelings", ordered temporally? What is a thought except concepts and words ordered temporally?
One big difference is that language models don't experience time: they get an input, give you an output, and that's that, while the human brain is much larger, exists continuously, gets far more information as input, and has different aims.
But I can't be sure that they're fundamentally that different.

Apodeipnon

I think we overestimate human understanding of meaning. How many of us have had actual experiences?
Also, as they say, a wise man learns from others' mistakes.

kaushalsuvarna

Is being dumb and trying dumb things a solution to problems, because it might block contradictory or false assumptions and therefore lead to intelligence?

robertsteinbeiss

I find it so interesting listening to people speak of these questions from a certain philosophical background.

This scientist's perspective is that "words" are "less informative" than... something else in reality.

But the only way humans interact with the world is through information and the conceptualization of that information. In other words, something very analogous to words. (Connections drawn between words are another huge aspect of it.)

She also says "human level". As if this means something.

It would only mean something if we had some idea of what humans do and are actually capable of.

The jury is still out on whether we aren't simply the 4th or 5th iteration of large language models.

Just like an LLM, I wasn't sure how this comment would end up, but after considering everything I aimed to communicate, this sentence seems like an apt ending to the entire post.

I find our ability to evaluate LLMs so strangely clouded by an innate bias of considering humans sacred and human reason as something more.

I cannot personally justify that viewpoint. Much of my own reasoning is simply a "remix" of the inputs I have received throughout my life. Just like an LLM.

jks

There are a few steps that need to be properly acknowledged:
1. We need to clearly define ambiguous terms like ‘meaning’, ‘reasoning’, ‘consciousness’, etc. when talking about AI systems.
2. In order to understand what’s going on internally, you need to understand not only the architecture but also its internals, i.e., the weight matrices that are learned.
3. As an alternative to, or follow-on from, point 2, we need to accurately define experiments which demonstrate the specific phenomena we want to discuss. E.g., if we want to understand whether LLMs can make decisions, we need robust and constrained experiments which force LLMs to do this and to explain why. We are seeing much more of this in research now, but we need to be more careful about what kinds of experiments we run, and not just build LLMs which score high on leaderboards/benchmarks (a rough sketch of what such an experiment could look like is below).
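To make point 3 concrete, here is a minimal sketch of what a constrained, forced-choice experiment could look like. query_model is a hypothetical placeholder for whatever model API is being tested (not a real library call), and the single task shown is invented for illustration; the point is that pinning the model to a fixed answer format lets the decision and the stated reason be scored separately:

```python
import re
from typing import Callable

# Hypothetical placeholder: swap in a call to whatever LLM API you're testing.
def query_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

# Forced-choice tasks with a known correct option, so the "decision" is checkable.
TASKS = [
    {"question": "A train leaves at 3pm and the trip takes 2 hours. Arrival?",
     "options": {"A": "4pm", "B": "5pm", "C": "6pm"}, "answer": "B"},
]

PROMPT = (
    "Answer with exactly one line of the form 'ANSWER: <letter>' "
    "followed by one line 'BECAUSE: <your reason>'.\n\n"
    "Question: {question}\nOptions: {options}"
)

def run_experiment(model: Callable[[str], str]):
    results = []
    for task in TASKS:
        options = ", ".join(f"{k}) {v}" for k, v in task["options"].items())
        reply = model(PROMPT.format(question=task["question"], options=options))
        choice = re.search(r"ANSWER:\s*([A-C])", reply)
        reason = re.search(r"BECAUSE:\s*(.+)", reply)
        results.append({
            "correct": bool(choice) and choice.group(1) == task["answer"],
            "gave_reason": bool(reason),  # score the explanation separately
        })
    return results
```

Run run_experiment(query_model) over a few hundred such tasks and you get an accuracy number and an explanation rate you can actually argue about, instead of a vibe.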

sidnath

It already understands meaning relative to most purposes and most basic meanings.

ldPlayer

The level of actual understanding in a “garbage-soup” model AI depends somewhat on our definition of “understanding”, but it can only be somewhere between undetectably low and very low (an average 2-year-old, or someone severely demented). Still, the compost language model is one of my strongest inspirations to create an AI that has the potential to actually understand things eventually, so I must be thankful. It’s like the first loud-mouth in a Bud Spencer movie who receives a ridiculous slap and flies out of the window.

idegteke

If you talk to your GF, she'll ask, "What did you mean by that?"
Ask ChatGPT that, and you break it. The typical BF.

carnsoaks

When I answer questions, I'm not thinking in the way humans do. I don't have thoughts, feelings, or consciousness. Instead, I process the input you provide based on patterns in the data I've been trained on, and I generate responses based on that processing. My responses are not the result of conscious thought or reasoning.

OBGynKenobi

When I was studying ANNs for image recognition, there was a basic illustration: each neuron in the first layer recognizes one specific line shape, the next layer combines pairs of lines into simple figures, the next recognizes a whole shape, the next adds color, and progressively you get a search web that can estimate what any given photo might be, without needing to retrain when a new category is added, and the whole process can be intuitively visualized. I assume we can simplify language models to something similar.
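That "lines, then figures, then shapes" picture maps pretty directly onto how small convolutional networks are stacked. A minimal sketch, assuming PyTorch; the per-layer comments reflect the usual intuition about what each stage tends to pick up, not something the code guarantees:

```python
import torch
import torch.nn as nn

# Tiny CNN whose stacked layers mirror the "edges -> shapes -> objects" intuition.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early filters: edges / line fragments
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid filters: corners, simple shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # later filters: textures, object parts
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),                            # final layer: scores for 10 categories
)

x = torch.randn(1, 3, 64, 64)   # one fake RGB image
print(model(x).shape)           # torch.Size([1, 10])
```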

Amonimus

The thing with recognition, though, is that for something like image recognition of microwaves, the pipeline isn't that difficult: we essentially know, and can model in 3D, every microwave that has ever existed, and then project those models down to 2D. The gain from doing that is probably orders of magnitude over plain 2D recognition, because the sample of real photos taken in 3D space IRL is far smaller than the practically limitless images that can be computed in a virtual 3D-modeled environment.
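As a toy illustration of the "render the 3D model from endless viewpoints" idea, the sketch below (assuming NumPy) uses the corners of a box as a stand-in for a real microwave mesh and an orthographic projection as a stand-in for a real renderer, but it shows how one 3D model can yield as many synthetic 2D views as you want:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "microwave": corners of a 3D box (a real pipeline would load a mesh).
box = np.array([[x, y, z] for x in (-1, 1) for y in (-0.6, 0.6) for z in (-0.8, 0.8)], float)

def random_view(points):
    """Rotate the object by a random angle and project it to 2D (orthographic)."""
    theta = rng.uniform(0, 2 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0,              0,             1]])
    rotated = points @ rot.T
    return rotated[:, :2]          # drop depth: the 2D "image" from this viewpoint

# Generate as many synthetic 2D views as we like for training.
views = [random_view(box) for _ in range(1000)]
print(len(views), views[0].shape)  # 1000 (8, 2)
```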

patrickhendron

Dumb and smart describes most humans too.

bradweir

The AI model will eventually have all of human experience within it. It will be able to reference experiences that maybe only one or two humans have ever had. It will understand common human experience over millennia and how our norms and values have changed over time.

SoCalFreelance