What Happens When AI Goes Wrong?

The REAL Reason People Are Scared of AI

Artificial intelligence is advancing faster than anyone predicted. Legislators around the world are racing to keep up and protect us from what they call ‘nightmare scenarios’ - here are 6 of those situations.

Special thanks to Carme Artigas, Spanish Secretary of State for Digitalisation and Artificial Intelligence.

-- VIDEO CHAPTERS --
00:00 Intro
02:52 Predictive Policing
06:02 Elections
09:12 Social Scoring
14:57 Nuclear Weapons
18:32 Critical Sectors
24:12 Optimist’s Take
25:15 Credits

Correction:
1:19 We misspelled "Python" here - oops!
14:56 Misspelt, should be "Nuclear"

About:
Johnny Harris is an Emmy-winning independent journalist and contributor to the New York Times. Based in Washington, DC, Harris reports on interesting trends and stories domestically and around the globe, publishing to his audience of over 5 million on YouTube. Harris produced and hosted the twice Emmy-nominated series Borders for Vox Media. His visual style blends motion graphics with cinematic videography to create content that explains complex issues in relatable ways.

- press -

- where to find me -

- how i make my videos -

- my courses -
Comments

He forgot about two important factors: greed and lobbyists

testsubjectno

One of the researchers I watched said something that stuck with me: “The view is more beautiful the closer you get to the cliff.”

thecharredremain

AI methods are taking over YouTube revenue. After reading the “Mastering the AI Money Game” book, it feels unfair.

victoriar

Two things you forgot to cover:

The threat to jobs - This year my company laid off 1800 people, with promises of re-hiring the same number... to develop AI. Not everyone can work in AI development. Of my coworkers who were let go, one was a tech support lead who oversaw the teams handling incoming helpdesk calls from employees - the company then set it up so that, before callers reach our helpdesk, they have to go through an AI that tries to answer their questions. The other was a program manager who set up pilot tests of various accommodations for disabled coworkers.

Hallucination - Ask an AI about something it doesn't know and at least some of the time it will invent an answer, because at best it's just guessing what we want to hear - you said that part yourself. For example, my company's generative AI platform made up a nonexistent company mascot when asked what the company mascot is (we don't have one).

CayceUrriah

At 16:10, you say AI will be better at making decisions than humans. That completely ignores the alignment problem, arguably the most important AI fear. Hitler was very good at making decisions, but they were wrong decisions. Alignment is the key issue in your infrastructure example, as well. If you ask ChatGPT about a field where you know little, it seems super smart. If you ask it about a field where you're an expert, you see it's slightly wrong a lot of the time. In reality, it's slightly wrong all the time, you just don't know enough to catch it in some fields.

rsaunders

Military AI predictions are some of the most useful and dangerous AIs. If you act on a predicted attack that hasn't happened yet, that could have worse consequences than doing nothing.

bodeyreagan

I work for an HR podcast and have access to a lot of insight that most people don't, and I assure you, we are already at a place where AI is deciding who gets hired and who doesn't. It's not a hypothetical scenario. It's now.

StuartHetzler

The worst outcome I can imagine is humanity becoming dumber by being overly reliant on AI

kumarsatyam

I love the redundancy argument starting at 18:32. There is a massive qualitative difference between outsourcing as a choice among options, and being dependent on outsourcing because you have forgotten (or have never learned) to grow food, maintain the grid, heat your home, etc.

Humans should learn to take care of all their own needs directly in the manner of homesteading, not only useful as a fallback, but having these skills changes the character of the choice to engage with society and trade.

Most people only choose which grocery store they purchase from; they have no choice of whether to purchase from a grocery store, because they have never learned any other way of feeding themselves. I want both choices: which and whether. That is more resilient, and it is more free.

GoodandBasic

It's funny because all of these scenarios completely gloss over the "we automated all the jobs so now nobody can eat, and they just liquidate half of the species to make room for golf courses and luxury resorts" option.

maximumPango

"Show me what's in the black box" is a statement made by a politician who knows very little about AI. Putting in a "normalized" and "balanced" dataset doesn't always work, as it isn't representative of reality. Life isn't equal in every domain; that's why the AI is able to pick up on patterns. It doesn't discriminate on the data; the data is what it is. To prevent very basic things from going wrong, thresholding techniques can be put in place to check the output of a model, or a select group of people can be kept in the loop to monitor it.
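The thresholding idea above can be sketched in a few lines. This is a minimal illustration with hypothetical names (no real system's API): a model's prediction is accepted automatically only when its confidence score clears a threshold, and anything below that is routed to a human reviewer.

```python
# Minimal sketch of output thresholding with a human-in-the-loop fallback.
# All names here are illustrative, not from any real ML library.

def route_prediction(label: str, confidence: float, threshold: float = 0.9):
    """Accept the model's label only when its confidence clears the
    threshold; otherwise flag the case for human review."""
    if confidence >= threshold:
        return ("auto", label)
    return ("human_review", label)

print(route_prediction("approve", 0.97))  # high confidence: handled automatically
print(route_prediction("deny", 0.62))     # low confidence: escalated to a person
```

The threshold itself is a policy choice: setting it higher sends more borderline cases to people, trading throughput for safety.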

__Wanderer

To sum it up, the threat of AI is humans misusing it against each other.
1. Crime is largely a reaction of a population facing unbearable material conditions, leading people to take extralegal measures to adapt. Recognizing and addressing those material conditions will vastly reduce the prevalence of crime.
2. The fact that people rely on social media and news media as their primary, possibly their sole source of political information is what makes this method a major challenge.
3. Yeah, none of this is new. AI just allows them to do it faster and with less labor involved. The way this becomes a threat is the fact that people with exclusive power (i.e. government, corporations, etc) will use it solely to their benefit. Take away that exclusivity of power, and the benefits of such misuse are nearly non-existent. Regarding China, that's just western nations projecting their own motivations to discredit their rivals.
4. AI is a tool. It should never be given its own agency when it serves a far better purpose as a means to provide useful information.
5. There is no reason to give AI control of infrastructure when we only need it to automate the labor-intensive tasks and give us the results.
6. AI is a tool of automation. Its value is in taking labor-intensive tasks and reducing the time and effort required to get from input to output.

greevar

1:19 It's spelled "Python" not "Phython"

haltarys

I honestly don't see how any of these scenarios are specific to AI. All of these problems could arise with "classical" software consisting entirely of if/else statements, which we have been relying on for decades now. AI is developed for cases where you cannot easily come up with a classic if/else algorithm, but when it fails, it does not create more or less chaos than a bug in a classical computer program. All of the dangers mentioned in this video arise if we rely too much on fully automated systems, regardless of whether they are AI or normal computer programs.

ifellasleeeeep

That yellow line inside the video marking your ad is the best thing I've seen since 2020

artyono

Preventing crime before it happens sounds like Person of Interest

PeaceChiillax

Most of the applications of AI mentioned are not a new technology. Social scoring, traffic, and water plants all use narrow AI, a type of AI that is decades old. The goal of the big companies is developing an AGI, a general-purpose AI that can do everything a human can. The real risk is what happens after AGI: how fast it can develop even better AI that is 100, 1000, a million times smarter than a human. The danger for humanity is the unknown. What will happen to society when we no longer have control? And I doubt robots and nukes will be the most rational solution for something a million times smarter than us.

skillerbg

These videos always have me locked tf in! super interesting and informative, good job

SethGreve-gm

I was an AI product manager for GE Software and now make videos about how AI actually works. The danger in AI is that it's designed in a way that guarantees bad behavior at times. Usually not, but every large AI system, including ChatGPT, will at times do unexpected and sometimes very bad things. Global governance of AI is most likely not going to happen, and even if it is put in place, it can't guarantee AI won't do very negative things. The false arrest scenario you reported on will become commonplace, especially because police already target African Americans for hostile behavior more often than any other demographic.

TheThinkersBible

What do you mean 'we're ok with credit scores'?? No one with a brain is or ever has been. They are 'normalized' because we the people know we have absolutely no power to get them revoked. The people who COULD sway our politicians all have good scores, so why should most of them care?

No, Johnny, we are NOT 'okay' with credit scores.

Kisamaism