Why The World Isn't Taking AI Seriously Enough


Eliezer Yudkowsky is a researcher, writer, and advocate for artificial intelligence safety. He has written extensively on AI safety and has advocated for the development of AI systems that are aligned with human values and interests. He is a co-founder of the Machine Intelligence Research Institute (MIRI) and of the Center for Applied Rationality (CFAR).

🎙 Listen to the show

Follow on Socials

About the Show
Logan Bartlett is a software investor at Redpoint Ventures, a Silicon Valley-based VC with $6B AUM and investments in Snowflake, DraftKings, Twilio, and Netflix. In each episode, Logan goes behind the scenes with world-class entrepreneurs and investors. If you're interested in the real inside baseball of tech, entrepreneurship, and start-up investing, tune in every Friday for new episodes.
Comments

He's the guy at the beginning of the movie who gets ignored but is ultimately correct.

malik_alharb

I trust this guy. He speaks from the heart.

bbeans

The thing that surprises me the most about this advancing technology is that no one in a position of power is talking about how we will need a new economic system. That is if AI doesn't destroy us...

Recuper

He doesn't see the cup as half empty, but as shattered into pieces.

plumbo

'Eliezer has a dumb hat and weird facial expressions, therefore he must be wrong' - many people.

marklondon

It's naive to think states will not develop more powerful AI than GPT-4 if they can, even if there were supposed to be a moratorium. AI now has unstoppable momentum.

QuentinBargate

It's good that you made a short chunk of this interview; I watched the whole thing. I also watched Ross Scott's 'debate' with Eliezer, who thought it might be a good idea to ask his interlocutor to do zero homework first. It was a train wreck. I guess he was hoping to be convincing without three hours of deeply challenging explanation. Nope.

"It's the lack of clarity that is the danger". It's less clear and immediate than climate catastrophe and we aren't responding to that.

lshwadchuck

I miss Steve Jobs. He always wanted tech to serve humanity, not the opposite, which is what is happening today.

andybaldman

Eliezer has made a full-time job of his alarmist YT world tour. For all his proselytizing about what other math and physics experts should be doing, he's one of the most discouraging voices in the room at all times, and doesn't seem to be doing much research himself these days. He will literally say in the same breath that physicists should change their whole career course to deal with this, but that it wouldn't matter if they did because we're all too late and too dumb. Then he shrugs at people, like he's sorry to be the bearer of this immutable bad news. It's a shame: he's an influential voice for good reasons, but when he says he intends to go down fighting, this is not what that looks like. Every D-F science student who ever saw Terminator is on the internet shrieking about the impending AI apocalypse, and Eliezer chooses to join that cacophony by sarcastically mocking string theorists for focusing on the 'wrong field,' while he puts out yet another YT guest spot crying doom. He's more than qualified to lead by example on this (and did, for years). He's smart and competent enough to literally work on proving and evaluating the mechanics of transformer systems himself, but instead he uses his agency for this. 🤔

andydougherty

Google is busting a gut to create an AI at least 10x more powerful than GPT-4.

alertbri

Why isn't anyone listening? Is this real? What universe do we live in?

teugene

This is very doable once American lawmakers regulate large AGI training. International diplomacy is much more feasible now than it was 100 years ago because of the speed of communications.

robertweekes

Cue someone saying that he is just scared of new technology, which is absurd. It's odd that numerous people who are at the heart of this field are running around with their hair on fire. The mindless optimism of many others is what is truly concerning; no one should be pooh-poohing the idea of brakes and a seat belt in favour of greater speed.

PrincipledUncertainty

They stopped using GPUs a while ago; they use AI accelerators, which are akin to ASICs.

mnemonix

While the dangers of AI (powerful tools in the hands of men) are very real, the solutions of Yudkowsky and his ilk invariably seek to centralize AI in the hands of governments. That's not a future I want.

trucid

The countries could sign an AI agreement under something like the Antarctic Treaty, as they all seem to agree on that one. Just saying.

jadedbludarling

The scenario is this: we develop these large language models, then (some mysterious thing happens), and no humans are left on the planet.

James-iptc

The reality is we won't stop. If we can do something, we will. The point is to put in as many safeguards as possible.

reedriter

Universal basic income, it's finally time.

jfutures

I think that in five years or less you will be able to run the equivalent of GPT-4 on a normal computer, and a very high-end PC may run something five times more powerful.

runvnc