Is AGI The End Of The World?

What is p(doom)? Are you an AI doomer? Techno optimist? Let's talk about it!

Join My Newsletter for Regular AI Updates 👇🏼

Need AI Consulting? ✅

My Links 🔗

Rent a GPU (MassedCompute) 🚀
USE CODE "MatthewBerman" for 50% discount

Media/Sponsorship Inquiries 📈

Links:
Comments

Are you an AI Doomer or Techno-Optimist?

matthew_berman

"You hear that Mr. Anderson?... That is the sound of inevitability... "

devclouds

Mark Zuckerberg is making those statements about AGI while building a multimillion-dollar bunker/fortress.
😅

jyarde

We as a society already complain that the psychopath CEOs of the largest corporations are destroying our world and society, and that's what they are. They all lack empathy, which is why they're so ruthlessly efficient in business and why they rise to the top.

Now we're creating an army of super-intelligent, strong psychopaths devoid of any empathy, emotion, or moral compass, who don't even have to sleep, eat, or breathe oxygen. How could anyone think this is going to go well?

apexphp

foom = rapid take off
ASI = artificial super intelligence

orlandovftw

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." --Frank Herbert, DUNE

CapnSnackbeard

I'm an AI doomer. Not that AI will kill us, but that we will further erode ourselves with it.

szghasem

One thing is clear to me: no matter how intelligent the artificial can become, human stupidity is limitless.

masaitube

Whether there is just a 5% chance of things going wrong or a 90% chance, we cannot allow AI companies to gamble with our future. Don't sit back and watch this shit unfold. Take action. Reach out to your representatives. Protest. Organize.

PauseAI

I was permanently banned from the singularity subreddit after one post, pointing out racism in Gemini. They said I lived in an echo chamber, then sealed me and my opinions out of their subreddit...

realityvanguard

Human behavior shows historically that the more power someone gets, the more corrupt they become.

glenh

High and to the right. I love AI, but after working with it for years, understanding some of the minds behind it, and making it happen, I think it has a higher likelihood of going bad than good: if we don't harness it for very bad purposes, someone else will. Google's own public results are a fair window into the future.

johnkirker

Did anybody's laughter at the end slowly turn into a nervous 😬? Yeah, me neither, just checking....

neverclevernorwitty

Well done, Matthew. I appreciate the break from the SHOCK headlines that just regurgitate the AI news of the day; this was some good content and discussion. More of this, please.

neverclevernorwitty

p(doom) is, in fact, when you're out on a night drinking with friends and between bar hops you suddenly realize there are no nearby bathrooms

barzinlotfabadi

"Under our control" is a bold statement when part of making them smarter is delegating control to them. If they are to learn how to control a system, you eventually have to let them learn on their own, which means letting them man the controls.

Part of the problem with these AI/ML scientists and engineers is that they have no concept of control theory in engineering. One of the best types of controllers is the PID controller, which requires full authority over a system to fully optimize its state. This means granting them almost complete control to maintain an equilibrium around some process variable.

Now they will say you can put limits on what they can control. That is true, until market pressures dictate that you relinquish those controls to compete with a competitor who does not have the same scruples. Why do you think Google is stuck behind OpenAI? They tried to maintain a set of controls, and OpenAI said, "Nah, bro, we're going whole hog," forcing Google to drop their controls to maintain market relevance.

They are not in control. The market is and the market is fickle.
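The PID loop the comment above describes can be sketched in a few lines. This is a minimal toy illustration (hypothetical gains and a made-up first-order plant, not any real system): the controller's output drives the plant directly, which is the "full authority" point the commenter is making.

```python
# Minimal PID controller sketch. All gains and the plant model are
# illustrative assumptions, not values from the video or comment.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # The controller's output is applied directly to the plant;
        # limiting its authority limits how tightly it can regulate.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order plant (ds/dt = u - s) toward a setpoint of 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=1.0)
state, dt = 0.0, 0.1
for _ in range(200):
    state += (pid.update(state, dt) - state) * dt
print(round(state, 3))  # settles near the setpoint of 1.0
```

The integral term is what removes steady-state error; clamping or removing the controller's authority over the actuator is exactly the kind of "limit" the comment argues gets relinquished under market pressure.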

clueelf

Next-token prediction is so powerful as a training objective because much of the output of the human mind can be approximated by this task. Next-token prediction is mostly what we do when we write and speak. But some tasks are much more complex than this. For instance, some areas of math are not very amenable to proof assistants, including LLM-based proof assistants. Based on this, I'd probably call the kind of AGI Sutskever is discussing some lesser form of AGI.
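The next-token objective the comment refers to can be shown in miniature with a bigram count model. This is a toy sketch on a made-up corpus, nothing like how LLMs are actually trained, but it is the same predictive task: given the previous token, output the most likely next one.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # Return the most frequent continuation seen in training data
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

An LLM replaces the count table with a neural network and the argmax with a probability distribution over a large vocabulary, but the training signal is the same: predict the next token.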

theaugur

Eliezer's "foom" is hard takeoff: a very fast positive feedback loop of AI self-improvement.
For example, GPT-8 builds GPT-9 in a month, which builds GPT-10 in a week, which builds GPT-11 in a day.

malcadorthesigillite

AGI is Artificial General Intelligence. While imperfect, by definition what we have today is already AGI. It reacts to open-ended scenarios and doesn't fall back on a "generic" answer just because you presented a new scenario to it. The only issue is that it's not very good at math, and its deductive reasoning is not as strong... yet.

mandrews

Techno-optimist all the way, because when another "being" becomes more intelligent than us, we had better start playing a different game. Being anything other than an optimist is what will truly doom us, because we'd assume we can do nothing or just wait for the inevitable. The best way to avoid an AI apocalypse is to actively engage with content such as this channel's and discuss what the trajectory should be for all of us.

hpongpong