Taming Silicon Valley - Prof. Gary Marcus

AI expert Prof. Gary Marcus doesn't mince words about today's artificial intelligence. He argues that despite the buzz, chatbots like ChatGPT aren't as smart as they seem and could cause real problems if we're not careful.

Marcus is worried about tech companies putting profits before people. He thinks AI could make fake news and privacy issues even worse. He's also concerned that a few big tech companies have too much power. Looking ahead, Marcus believes the AI hype will die down as reality sets in. He wants to see AI developed in smarter, more responsible ways. His message to the public? We need to speak up and demand better AI before it's too late.

Buy Taming Silicon Valley:

Gary Marcus:

Interviewer:
Dr. Tim Scarfe

(Refs in top comment)

TOC
[00:00:00] AI Flaws, Improvements & Industry Critique
[00:16:29] AI Safety Theater & Image Generation Issues
[00:23:49] AI's Lack of World Models & Human-like Understanding
[00:31:09] LLMs: Superficial Intelligence vs. True Reasoning
[00:34:45] AI in Specialized Domains: Chess, Coding & Limitations
[00:42:10] AI-Generated Code: Capabilities & Human-AI Interaction
[00:48:10] AI Regulation: Industry Resistance & Oversight Challenges
[00:54:55] Copyright Issues in AI & Tech Business Models
[00:57:26] AI's Societal Impact: Risks, Misinformation & Ethics
[01:23:14] AI X-risk, Alignment & Moral Principles Implementation
[01:37:10] Persistent AI Flaws: System Limitations & Architecture Challenges
[01:44:33] AI Future: Surveillance Concerns, Economic Challenges & Neuro-Symbolic AI
Comments

REFS:

MachineLearningStreetTalk

To all those who can’t stand a voice offering caution regarding a foundational shift in our collective human experience, I have to ask where the blind trust comes from in the folks building multi-million dollar bunkers and space rockets. Are they planning an exit strategy or just doing it for fun? Call me pedestrian, but I like it here. Messy? You bet. Room for improvement? Sure. But to simply cede our autonomy to a handful of aggressive, business-casual capitalists with a penchant for sci-fi books is hardly empowered or optimistic thinking.

ericbrown

One critiques Silicon Valley and suddenly a load of bots and ignorant fanboys appear in the comment section. Interesting. The viewpoint of this channel has always been that LLMs are not advancing anywhere near AGI. ChatGPT o1 does not at all solve the lack of a world model.

federicoaschieri

"People will see a single correct answer and assume that a system has correct underlying abstraction"

egor.okhterov

I don't understand such strong criticism of Gary. His main points are true:
1. Current AI technology is fundamentally unreliable, and giving it more data won't solve that in principle.
2. A few large corporations are using AI hype to gain enormous influence.
3. AI is starting to cause societal harm and needs to be regulated.

Hexanitrobenzene

Most of his arguments seem to push at open doors, in the sense of criticizing some basic misconceptions about LLMs, yet none of this diminishes the value of LLMs once it is understood. Take the complaint that an LLM doesn't have a "stable world model". Well, neither do I, nor would I want to, since no stable model could ever fit a dynamically shifting, complex, and strange world. All world models are at best extremely incomplete useful approximations. And why would an LLM need a world model similar to a human's? It could have a very different approximate "world model" that provides insight into patterns different from those humans perceive.

Sure, LLMs are unreliable, but so are we humans. Again, this is no major problem, since I don't want the LLM to do all the thinking for me. I want it to complement, enhance, and inspire my thinking, and there SHOULD be some weirdness and randomness to it. In specific cases where extreme accuracy is needed? Don't use LLMs! If you want perfect chess, use Stockfish. LLMs work best in creative collaboration with humans where the humans make the major decisions: if you ask dumb questions, you often get dumb answers, and if you ask profound questions, you get profound answers. The real potential of LLMs is in enhancing human creativity and insight when working WITH humans.

GlennGaasland

Love the clarity of Marcus' thought processes and explanations over such a broad range of aspects of AI's trajectory. On the morality discussion...

I think the missing dimension of this discussion was the investment community as stakeholders and their typically "amoral" stance towards choosing their investment theses and all subsequent decision making. Maximising for profit is always going to introduce moral dilemmas. As long as VCs, PE firms and funds are run by people who adopt an amoral stance motivated mainly by wealth accumulation (i.e. everyone, in practice), there will be executives in AI companies who remain conflicted between their morals and their financial self-interest.
And as Taleb says, only financial independence can free us from this conflict - Gary being another case in point.
In reality, entrusting the trajectory of AI to the morals of individual actors in the AI world is a fool's game. Which is why I think Gary is right that regulation is the only answer.

neurojitsu

Social media has definitely been a calamity, one I'm not certain we're going to recover from.

bujin

Strange, negative comment section. Thanks to Dr. Marcus and this YouTube channel for sticking to the facts.

faster-than-light-memes

It's good he realizes in retrospect that his actions in DC were naive.

heterotic

Everything he said made perfect sense to me. I don't understand why people are getting so pissed. GPT is in fact shit and orders of magnitude dumber than me at accomplishing something.

deep_AI_

The moral decline started about the time Facebook came into existence. He's a little late.

itzhexen

I don't understand how fans of this show can be so hostile to Marcus, considering how much overlap there is between Gary's criticisms of AI and the views espoused on MLST.

deadeaded

Tim, I love that you’re using an actual paper notebook rather than a phone or tablet. 😁

fburton

This man can be wrong and, at the same time, we can still be moving too fast. Don't let the message be killed by the messenger.

johndewey

What is interesting about GPT-4 and chess is that, given very simple rules on the movement of pieces, it still makes illegal moves, as if it does not have an understanding of compositionality and systematicity but is instead just doing pattern prediction. Only with the ability to learn rules, whether Petri nets or other symbolic planning structures, can it truly reason. Clearly, if you asked the LLM what the rules of chess are, it would have no problem producing them, so the disconnect is between knowledge and the application of that knowledge as rules.
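The disconnect described above is easy to make concrete: piece legality is a small symbolic rule that can be enforced exactly in a few lines of code, whereas a purely pattern-predicting model can violate it. A minimal sketch for one piece (the `knight_moves` helper is illustrative, not from the episode):

```python
# Symbolic rule application: a knight's legal destinations follow
# directly from a fixed rule (L-shaped offsets, stay on the board),
# so legality can be checked exactly rather than pattern-matched.
def knight_moves(square: str) -> set[str]:
    """Return all on-board destinations for a knight on `square` (e.g. 'g1')."""
    file, rank = ord(square[0]) - ord('a'), int(square[1]) - 1
    offsets = [(1, 2), (2, 1), (2, -1), (1, -2),
               (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return {chr(f + ord('a')) + str(r + 1)
            for df, dr in offsets
            for f, r in [(file + df, rank + dr)]
            if 0 <= f < 8 and 0 <= r < 8}

print(sorted(knight_moves('g1')))  # ['e2', 'f3', 'h3']
```

A model that "knows" this rule in the sense of being able to recite it, yet still emits a move outside this set, is exactly the knowledge-versus-application gap the comment points at.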

richardnunziata

This was filmed before OpenAI's o1 model, and already this episode has not "aged well". It's hard to watch an expert coder devolve into opinionated philosophy; this is the age of recursive code, after all, and we need our elders to keep up.

astilen

1:17:55 elegant way to say that Elon lacks integrity. I think that lots of us can see that, and it's disappointing.

BrianMosleyUK

Thank you very much for a very commonsensical conversation about this new resource/tool!

MiaNoble-wc

Agree with many of his points. LLMs are impressive to humans and we are charmed by their emergent abilities, but this is far from being founded on any fundamentals - it's just large brute-force computation over data.

boonkiathan