Can computers understand?

Can computers understand? That's the question explored in "Understanding Computers and Cognition," an interesting old book on philosophy and AI by Winograd and Flores. In this video, I walk you through the basic arguments and ideas.

Note: I recorded this when I was away from my usual studio last Dec/Jan, so my apologies for the lighting, etc., if that bothers you.

MICHAEL'S NEWSLETTER

FREE INTRODUCTION TO PHILOSOPHY

RELATED COURSES

FOLLOW ONLINE

ABOUT ME
I teach politics and philosophy to professionals in law, education, finance, and tech through video courses and private tutoring. I also offer private advisory services to hedge funds, corporations, governments, and other institutional clients.
Comments

Michael, I studied under Dr. Flores at the time he and Dr. Winograd published this book. Thank you for the overview of this text and for mentioning his subsequent books. The introduction of LLMs moved AI closer to embodying the "background of understanding" which precedes formal logic. Four decades after publication, these insights from Heidegger still provide guidance on the possibilities, limitations, and strategies for advancing AI.

thewjw

I have to express some bewildered appreciation for the YouTube algorithm for putting me onto this channel. At first it tried to recommend me some type of AntiFa ideological content that felt like a flashback from a decade ago. I wasn't pleased, but after giving it what negative feedback I could, it then sent me here. I subbed in under an hour, absolutely top-shelf content. That's quite the turnaround!

blackjackking

The Heidegger specialist Hubert Dreyfus's critique of AI is still relevant, I think.

prismismfilms

You should have a chat with John Vervaeke

the_wheelbarrow_of_pathos

Damn those YouTube algorithms. Your channel should be widespread in these times. Thank you, Mr. Millerman! If I may say so... I hope you keep it up!

peterfrank

There are plenty of ways to use propositional systems to create formal languages that account for context sensitivity, such as frameworks built on Grice's conversational maxims or Leech's pragmatics. These frameworks can capture intention and illocutionary force by modeling implicature and context-dependent meaning. However, modern transformers go beyond such formal systems by leveraging vast amounts of linguistic data to anticipate context and nuance through statistical training.

In this sense, the linguistic "sensorium" that transformers model enables them to approximate intention and conversational force without explicit symbolic logic. This challenges the strict syntactic-semantic divide highlighted by Searle's Chinese Room argument. While the Chinese Room remains a compelling analogy for critiquing purely syntactic processing, the performance of transformers suggests that the line between syntactic manipulation and genuine understanding is more fluid than previously thought. The relationship now feels less like a definitive critique and more like a conceptual reference point for discussing evolving AI capabilities.
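A minimal sketch of what "anticipating context through statistical training" looks like in practice, assuming the Hugging Face transformers library and the small "gpt2" checkpoint (both are illustrative choices, not anything discussed in the video or the book):

```python
# Minimal sketch: a pretrained causal language model ranks likely next tokens
# purely from statistical context, with no explicit symbolic rules for
# implicature or illocutionary force. Assumes the Hugging Face `transformers`
# library and the small "gpt2" checkpoint (illustrative choices only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Two prompts with nearly identical surface syntax but different
# conversational contexts; the ranked continuations shift accordingly.
prompts = [
    "Could you pass the salt? Sure, here",
    "Could you pass the exam? Sure, I",
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits        # shape: (1, seq_len, vocab_size)
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    candidates = [tokenizer.decode(int(i)) for i in top.indices]
    print(f"{prompt!r} -> top next tokens: {candidates}")
```

Nothing in this toy example settles whether such prediction amounts to understanding, of course; it only makes concrete the kind of context-sensitive anticipation the comment describes.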

benjones

Dr. Israel Kirzner's theory and lectures on entrepreneurship and entrepreneurial alertness and discovery sound incredibly similar.

Oversimplified: he uses an example in one lecture on YouTube of ingredients in a cupboard... one is hungry and doesn't think they have anything to eat, yet they may look around and, eureka, everything is at hand to bake a pie.

southerntreeremoval

Recognizing that linear language doesn't map well to the multidimensional world is a good area for investigation. Language should create a multidimensional discipline that bridges the gap between our multidimensional imaginative faculties and the multidimensional world we inhabit. I believe the structure of the flower of life was a way ancient civilizations organized information in a nonlinear form, and it should be adopted and reintroduced.

danskiver

Does the Chinese room fear death (or demolition)?

racoimbra

I was going to recommend Hubert Dreyfus, but someone beat me to it. There is apparently a fringe field called Heideggerian AI, but I haven't investigated it in depth. I was pointed to it by Chemero's introductory book on Phenomenology.

Jacob

I’ve really enjoyed your videos on books that aren’t well-covered by the mainstream, since I’ve also read many of them. Would you be interested in exchanging notes?

omanes_jgh

Excellent, Michael, thank you for the effort. "Our being is always being beyond itself" is a way of rephrasing the whole thing and its possible implications in a one-liner. Being, in our being, beyond our present state of being, as a fundamental human condition, alongside the three others Arendt elaborated on in THC, is a way of not saying how one always used to say it before: that we ARE spiritual beings and (considering Arendt and her conditions for the reality of real human power, or the preservation of a true human public space) ecclesially destined (the term "church" came over from classical Greek political terminology), that is, "always already" situated in a collective body of interrelations that go way beyond this earth, this world, and this era. Without any further ado: how could one possibly algorithmize, or rationalize through instrumental rationality, this ontological depth of ANY human being? AI and LLMs are one and the same ever-better-dressed emperor. But as Hegel got it, under an Emperor of any finite sort, only ONE is free, and there's no transcending of that truth either, unless one reads some Church Fathers on issues essential but long ago left out to inform our cleverness with a meaning that could sanely moderate it. Love your channel, man; Millerman it is.

guygeorgesvoet

To further simplify your decision tree scenario, there is compulsion. It can defy all the logic and reason in the world. I can program the rationale I believe a human uses into a computer program for it to make optimal choices, but in the end, I as a human being pick the blue Mini with a 5-speed that meets neither my needs nor my budget, simply because I like it; and I choose it despite all the negative implications of such a choice that I am already well aware of when making that choice. Can you program an AI to be rational and at the same time make a purely emotional choice that makes no sense, but just feels good in the moment?
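As a toy illustration of the tension this comment points to (not anything from the video; all names, prices, and weights below are hypothetical), one can bolt an "I just like it" override onto an otherwise rational chooser. The result simulates the compulsion rather than feels it:

```python
# Toy sketch (hypothetical names and numbers throughout): a "rational" chooser
# that picks by needs and budget, plus a hard-coded compulsion override.
from dataclasses import dataclass

@dataclass
class Car:
    name: str
    price: float
    meets_needs: bool   # cargo space, seats, etc.
    appeal: float       # how much the buyer simply likes it, 0..1

def rational_choice(cars, budget):
    """Pick the cheapest car that meets the stated needs and fits the budget."""
    viable = [c for c in cars if c.meets_needs and c.price <= budget]
    return min(viable, key=lambda c: c.price) if viable else None

def human_like_choice(cars, budget, compulsion_threshold=0.9):
    """Same rational rule, except a strong enough liking overrides it."""
    loved = max(cars, key=lambda c: c.appeal)
    if loved.appeal >= compulsion_threshold:
        return loved  # "I just like it" beats needs and budget
    return rational_choice(cars, budget)

cars = [
    Car("practical wagon", 22000, True, 0.4),
    Car("blue Mini, 5-speed", 28000, False, 0.95),
]
print(rational_choice(cars, budget=25000).name)    # practical wagon
print(human_like_choice(cars, budget=25000).name)  # blue Mini, 5-speed
```

The hard-coded threshold is exactly the problem the comment raises: the program follows a rule about when to be "irrational," which is arguably not the same thing as wanting the car.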

lonecanadian

I am building a statistical model while listening to this vid...

mahlahlana

To create artificial X, you first have to understand what X is and how it works. AI is a BS term; ML is the correct one.

galileo_rs

Humans don't understand anything either ;-)

williambranch