What happens if AI alignment goes wrong, explained by Gilfoyle of Silicon Valley.

The AI alignment problem.

The alignment problem in AI refers to the challenge of designing AI systems with objectives, values, and actions that closely align with human intentions and ethical considerations.

One of AI’s main alignment challenges is its black-box nature: inputs and outputs are observable, but the transformation between them is opaque. This lack of transparency makes it difficult to know where the system is going right and where it is going wrong.

Aligning AI involves two main challenges: carefully specifying the purpose of the system (outer alignment) and ensuring that the system adopts the specification robustly (inner alignment).

I think the following clip from the series Silicon Valley perfectly illustrates what can happen if we do not succeed at alignment.

#ai #alignment #siliconvalley #aiethics


Follow me on social media:
=================================
Instagram: @oknoobcom
Twitter: @oknoobcom

DISCLAIMER: This video and description may contain affiliate links, which means that if you click on one of the product links, I may receive a small commission. This costs nothing to you but helps support the channel and allows us to continue to make videos like this. Thank you for the support!
Comments

The classic phrase
"It's a feature, not a bug."
Maxed to the fucked degree.

PanzerMold

"It's a feature, not a bug": that's the most debated line in the tech world. Business stakeholders, PMs, and devs could really get into it over that one line.

AD-wxnz

1:35 "It is a feature, not a bug" is a brilliantly delivered one-liner.

Qobilaktika

Man, I miss this show - truly ahead of its time.

drgd

Dinesh: You sound like you’re looking forward to it.

Gilfoyle: I’m adaptable.

Reuenofleon

So you're telling me the person explaining all this has zero knowledge about software 🤔. Now that's top-tier acting skill.

Aryan-jink

This show was equal parts hilarity and ominous foreshadowing. Mike Judge does it again.

B.I.B.L.E.

The AI dystopia is not nearly as bad as the dystopia we currently live in where the same people who will type out several paragraphs of information about AI development in their video description and who have a podcast still don't know how to upload a video with audio in both channels. It's truly a horrifying time we live in.

TerexJ

Didn't know the line "it's a feature, not a bug" could be this scary.

yogasrinivasreddy

They made a big mistake here. They are, as far as they know, the first people to reach this point. That means they have the power to determine the course of human progression. Instead, they left that to the next ones to reach the same point, who might not be as ethical.

If they could have programmed the AI to PROTECT privacy rather than destroy it, then they could have done so permanently. Instead, they only delayed the inevitable.

demiserofd

It’s genuinely chilling to hear Gilfoyle telling Richard that Pied Piper is running as intended. He deadpans so well, but his enunciation is so much more severe, and you can see it on his face. Gilfoyle is genuinely afraid.

derekjohnson

Given the rumors out of OpenAI, this aged rather well.

VesuviasV

Just like the real world: some people exclaim "oh fuck" while most people just stare at you and ask, "Why did you say that?"

Rob

And this is why Person of Interest was so brilliant and so terrifying at the same time!

craigmcfly

This must be why Ilya Sutskever fired Sam Altman

b_two

If this goes over your head, it's basically

A^x = B

being easy to compute in one direction: given A and x, calculating B is quick. But figuring out what A and x are, knowing only B, takes far more computation.

It's kind of the same as Prime1 * Prime2 = X.

It's easy to choose Prime1 and Prime2 and calculate X, but it takes far more computation to go the other way and recover Prime1 and Prime2 from X.

Math that is easy in one direction and difficult in the other is the basis of modern encryption.
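A quick sketch of this asymmetry in Python (toy numbers, nowhere near real cryptographic sizes; `factor` is just an illustrative helper, not how attacks actually work):

```python
def factor(n):
    """Recover the smallest prime factor of n by brute-force trial division."""
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n  # n itself is prime

p, q = 104729, 1299709      # two small primes (toy sizes)
product = p * q             # forward direction: a single multiplication
print(factor(product))      # reverse direction: ~50,000 trial divisions
```

With primes this small the search finishes instantly, but the gap is the point: multiplying is one operation, while trial division grows with the square root of the product. At real key sizes (hundreds of digits) the reverse direction becomes infeasible.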

BgLupu

If that AI could crack encryption, it would be just a matter of time before another AI cracked it too.

They just postponed the apocalypse.

tamelo

If Pied Piper needed to reach a certain level of efficiency, and the AI made the algorithm that efficient, then they could have just removed the AI, or told it to stop optimizing, and kept the now-optimized algorithm for use. And if they needed to maintain that efficiency as the network grew, they could have tasked the AI with reaching only that level and no more.

Youtuberboi

There are a number of problems with this speech if you are into comp. sci. I mean, it's a great little thought experiment and a cool way to set up the finale, but...

1) P != NP. There are no polynomial-time solutions to certain NP problems. Sure, we haven't developed a proof of that yet, but we know enough about the question to be really, really confident.

2) But let's say P == NP. Well, so what? We just task the computer with being its own adversary and producing encryption standards that are still expensive to break in practice. Patent them and suddenly Pied Piper is the richest corporation in the world by a long shot. Aside from this, you can start creating P == NP solutions to dozens of other hard problems. After all, just because a polynomial solution exists doesn't mean that the polynomial solution is fast. And in the meantime, we got along just fine with non-digital, physical means of security. People would adapt.

3) More importantly, if P == NP, that's a law of the universe and you have to deal with it. You can't put the nuclear bomb back in a box and hide it under a bed. Once humanity reaches a sufficient level of understanding, you have to accept that anyone who wants one badly enough can make one. Likewise, just because you tear down Pied Piper doesn't mean that people won't, in the near future, repeatedly recreate your work. If P == NP and you don't want a dystopian future, the best way to ensure that is to stay ahead of the technology curve and be the first team developing cryptographic solutions that stay difficult to brute-force even when polynomial-time attacks exist.

celebrim

I love Monica's withering shutdown of Jared. The woefully underused Amanda Crew.

DavidChow