What We Get Wrong About AI (feat. former Google CEO)

AI is here. ... Okay, now what?

Everyone’s talking about artificial intelligence and machine learning. There are lots of names thrown around - OpenAI’s ChatGPT and DALL·E 2, Google’s Bard, Meta’s Llama 2, Midjourney, AlphaFold…

Right now, we’re in this weird moment where lots of smart people agree we’re on the cusp of a truly world-changing technology. But some seem to be saying it's going to lead to human extinction while others are saying it’s “more profound than fire.”

But it all feels so VAGUE.

I want to know: How specifically would AI kill me? Or totally transform my life for the better? In this video, that’s what I’m going to try to learn. We dive into what the most extreme bad and good AI futures actually look like, so that you and I can get ready. And more importantly, so we can make sure that we get our real future RIGHT.

Chapters:
00:00 Why is AI so confusing?
01:13 What is AI?
02:37 Why is everyone talking about AI now?
04:13 Thank you Milanote!
05:12 Why is AI dangerous?
06:22 How would AI kill me?
08:08 Should we pause AI?
09:12 Why do we WANT AI?
10:30 What has AI already done?
11:27 Why is AI so hard to talk about?

Bio:
Cleo Abram is an Emmy-nominated independent video journalist. On her show, Huge If True, Cleo explores complex technology topics with rigor and optimism, helping her audience understand the world around them and see positive futures they can help build. Before going independent, Cleo was a video producer for Vox. She wrote and directed the Coding and Diamonds episodes of Vox’s Netflix show, Explained. She produced videos for Vox’s popular YouTube channel, was the host and senior producer of Vox’s first ever daily show, Answered, and was co-host and producer of Vox’s YouTube Originals show, Glad You Asked.

Gear I use:
Camera: Sony A7SIII
Lens: Sony 16–35mm f/2.8 GM and a 35mm prime
Audio: Sennheiser SK AVX

Music: Musicbed


Welcome to the joke down low:

This time, I asked GPT-4 for AI-related jokes. They were mostly TERRIBLE. Like:

"Why don't AIs play hide and seek?"
"They always find you in 0.001 seconds!"

Finally, it gave me:

"Why did the AI go to the gym?"
"It wanted to work on its 'training set'!"

… Good enough.

Use the word “training” in a comment to let me know you read to the end :)
Comments:

I remember a lot of the recent AI milestones being described as "perpetually 10 years away." It feels so strange that they're now upon us.

johnchessant

The former Google CEO saying that he wants AI research to go ahead just so China doesn't get there first is exactly like the arms race all over again, if not more dangerous. I don't think anyone's saying we shouldn't develop AI in the future; I think we just need to understand what it can do and how to control it first.

lizzielwfrancis

I don't fear AI. I fear humanity.

cybersecuritydeclassified

The scariest part of the whole video for me was the fact that an AI that would dominate the whole world's systems would be based on either American values or Chinese values. Either is equally scary.

prabinpaudel

Cleo's enthusiasm is addictive 😍
She could talk about dirt and make it sound ultra exciting 😁

DrHosamUS

What surprises me is that the risk of AI pushing millions of people into unemployment, and the subsequent social/economic impact it could have, is barely talked about.

wltrlg

"We can't pause AI because we need to give it our political values" - the most terrifying thing I've heard in a long time.

Mrcloc

As a tech guy, I am constantly asked about AI and what it can do.

I am just going to send this video as a primer for people now.

This is fantastically done.

davidhine

I just want a talking refrigerator named Shelby

Alexthe_king

As always, a well balanced and honest look into something that’s very confusing. Love this show!

thatllwork_official

I love Cleo's take on journalism: Optimistic but not naive! It is not only informative but also inspiring! ❤

maxilin

I would love to see how AI can assist with research into diseases such as Parkinson's or MS.

TheSurfRyder

There's an important point that this short video _almost_ touches on but doesn't explore, and it's one of the most serious dangers of AI. Cleo mentions that AI gets to the result but *we don't understand how* it did it. What this means is that we also don't understand the ways it catastrophically fails, sometimes with the smallest deviation applied to its inputs. An adversarial attack is when you take a valid input (a photo, for example) that you know the output for (let's say the label "panda"). Then you make tiny changes in just the right places, in ways that even you can't see because they're so small at the pixel level, and now the AI says this is a photo of a gibbon. Now imagine your car's AI getting confused on the road and deciding to veer off into traffic. I hope Cleo covers this, because it's really important. To learn more, look up "AI panda gibbon" online and you'll find images about this research.

desmond-hawkins
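To make the adversarial attack described in the comment above concrete, here is a minimal sketch of FGSM (the Fast Gradient Sign Method, the technique behind the panda/gibbon images) in PyTorch. The names `model`, `image`, and `label` are assumed placeholders for a trained classifier and a correctly labeled input with pixel values in [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    # Make a leaf copy of the input so we can take gradients w.r.t. pixels.
    image = image.clone().detach().requires_grad_(True)
    # Loss of the model's prediction against the *correct* label.
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values valid; detach so the result is a plain tensor.
    return adversarial.clamp(0, 1).detach()
```

With a perturbation this small, the image looks unchanged to a human eye, yet a vulnerable classifier can flip from "panda" to "gibbon" with high confidence.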

Love your reporting Cleo. The enthusiasm and optimism you bring into your videos is contagious!

its.lorence

I can't help but compare - especially after watching Oppenheimer - the creation of nuclear weapons to the creation of AI. Both are double-edged swords (nuclear power plants), both could be dangerous, and the reasoning is always: if we don't do it, someone with worse intentions will.

MarkWilliam-co

AI itself never looked like a bad thing to me; it was always the way people used it that looked troubling. For example, some use it to create "art" by training the AI on images that they had no legal right to use. Overall, AI can be an amazing thing; it's just that, alongside its development, we should also have new laws so it can't be misused, at least in the ways that we know of.

_eev_

The trolley problem metaphor is flipped. We are headed straight into one AI future, and we would have to steer really hard if we want to avoid it.

tristanwegner

I think the greatest safeguard against the unintended consequences of AI is to limit what it has access to or what it can physically influence. For example, while an AI studied the patterns of human proteins and made predictions, it couldn't bio-engineer humanity, because it only had access to its own simulation and could only physically influence computer screens for display.

roscoeluekenga

Cleo, great video! You explained so many complex things in a simple, straightforward way. I'm glad you explained outer alignment: "you get what you ask for, not what you want." However, I was a little disappointed that you didn't cover inner alignment. If you punish your child for lying to you, you don't know if s/he learned "don't lie" or "don't lie and get caught."

AI safety researchers trained an AI in a game where it could pick up keys and open chests. They rewarded it for each chest it opened. However, there were typically fewer keys than chests, so it made sense to gather all the keys and open as many chests as it could. That normally wouldn't be a problem, except that when they put it in environments with more keys than chests, it would still gather all the keys first. That's suboptimal, not devastating, but it demonstrates that you can't really tell what an AI has learned internally. So AI might kill us because we didn't specify something correctly, or it might kill us because it learned something different from what we thought. Or it might become super-intelligent, and we won't even be able to understand why it decides to kill us.

givemeaworkingname
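As a toy illustration of the inner-alignment point in the comment above (a sketch only, not the actual experiment, with made-up function names): the agent's learned proxy goal and the intended goal agree on every training environment and only come apart when keys outnumber chests.

```python
# Hypothetical sketch of the keys-and-chests misgeneralization described above.

def intended_policy(keys: int, chests: int) -> int:
    # The goal we wanted the agent to learn: one key per chest, no extras.
    return min(keys, chests)

def learned_policy(keys: int, chests: int) -> int:
    # The proxy goal the agent reportedly learned: collect every key.
    return keys

# During training, keys were scarce, so the two goals look identical:
assert intended_policy(3, 5) == learned_policy(3, 5) == 3

# With more keys than chests, the proxy goal finally shows itself:
print(intended_policy(7, 2))  # 2 -> just enough keys for the chests
print(learned_policy(7, 2))   # 7 -> hoards all keys first, wasting steps
```

The two behaviors are indistinguishable on the training distribution, which is exactly why the misaligned goal can't be caught by training reward alone.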

Congrats on 1M! And you are nearly at 1.1M already! You have honestly been one of my favourite creators since your time at Vox; glad to see you having success!

ikinloch