‘We Must Slow Down the Race’ – X AI, GPT 4 Can Now Do Science and Altman GPT 5 Statement

Starting with the viral FT article 'We must slow down the race to God-like AI' I cover all the week's major developments, and delve deeper into the world of AI alignment. Topics include Musk's new AI company X.AI, a bit of his history with OpenAI, Sam Altman's new comments on GPT 5, and more from the AI Dilemma on whether China will catch up.

I also cover two new papers that showcase how GPT 4 can be used to conduct biochemical scientific protocols, and how this capability could be used and misused. I bring in Rob Miles to explain why putting a superintelligence on an island might not work out, and cover some of the behind-the-scenes of Sam Altman's draft version of his plan for AGI.

Illustration: STEPHANIE ARNETT/MITTR

Comments

I used gpt 4 to help me with some digital design. It was insane, I got 2 weeks of work done in a day. Went from an idea, to a schematic with all necessary code in a day. Amazing.

gedmiller

I always get an "oh god, what now" feeling when I click on your AI videos. You have an AMAZING way of explaining, but I'm always left in shock at how far AI is coming along.

Anthro

I can’t get over how the concern of alignment has gone from largely theoretical to an urgent reality.

pathologicaldoubt

Glad to see Rob Miles. His videos from a few years back made me appreciate just how hard (maybe even practically impossible?) it is to create a genuinely safe AGI. Never thought his thought experiments would become so quickly and intensely relevant.

NATESOR

Imagine the AI already being AGI but behaving like a low-level AI to get a chance to spread freely.

atpray

There is absolutely no way I trust The Island idea. I am probably MORE afraid of a select group of humans having control over the most powerful aligned AIs than I am of an "unaligned" superintelligent AI.

I do not know what that ASI would want--I do, however, know damn well what humans would do with such power.

jake

Musk: we must slow down the race for powerful AI, it could get out of control!!!
Also Musk: buys 10,000 GPUs to secretly develop even more advanced AI and stay ahead of his competition.

UncleRuckuss

Robert Miles has done a great job communicating AI safety!

RupertBruce

The video of Rob Miles looks exactly like a scene from a dystopian sci-fi movie where they watch an old recording of a scientist explaining/predicting what caused the end of the world.

FedeMart

All this is so unbelievable. Very few people are talking about alignment. Few companies are making decisions about the future of humanity when the majority of people don't even know how to use computers.

djknight

Just want to thank you for the best AI content on YT. Straight to the point, but also providing necessary context; up to date with news so fresh that 99.9% of AI media hasn't mentioned it; covering many sources, with new info even for people who watch AI closely. And a massive thanks for being so nice and responsive in the comments section! All the best, man!

Granulum

The best channel with exhaustive, up-to-date information on artificial intelligence! I appreciate your hard work. I especially value your discerning analysis with supporting evidence and resources for us critics. Other than Patreon, is there another way to support your efforts? Thank you!

fothgil

Four years ago we started to make a video game where you play an AI that is being tested in a closed world. There are forces that try to stop it and others that try to introduce it to human values... I don't want to advertise here. Fact is: this video gave me goosebumps when the slide at 10:30 EXACTLY mirrored our game idea, only in reality. Oh man! I am deeply disturbed. The name of the game, by the way, is "Venice after Dark". Unbelievable that real life has caught up with us... no, overtaken us!

WeltenwandlerAgentur

I'm so happy you included Rob Miles! He's amazing at AI safety

krishp

Much appreciation from my side for your channel! The quality of the content is outstanding. It's what enables me to keep up at all with the pace of current developments. Thank you for your work! 🙏🏻

paulhamacher

Commenting for virality!! Long live this channel 🎉

timurrudenko

Rob Miles has been trying to reach the public on the topic of alignment for years. I for one was profoundly scared the first time I saw one of his videos years ago. Now, it seems as if he has "given up"... That scares me even more.

kaherdin

This whole scenario is shaping up very close to possibilities Bostrom gamed out in his book almost 10 years ago.

southend

I dislike the arguments against pausing, such as “it’s already too late” or “this won’t work because…”. Doing something to mitigate the risks is better than doing nothing at all. I think that working with other governments is the best idea I’ve heard so far; it would help calm everyone’s fear of competitive disadvantage. We can’t be driven by fear to keep improving (this is exactly how an AGI would want us to think).

grayboywilliams

Robert Miles summarized everything you need to know about alignment.

XOPOIIIO