How do we prevent AI from creating deepfakes?

Money Saving Expert Martin Lewis is one of the most trusted people in the country - so it's easy to take his endorsement seriously.

But that trust also made him an obvious target for AI scammers, who replicated his face and voice to create a fake endorsement.

And that's deeply worrying, because as deepfake technology becomes more widespread, that kind of impersonation could make us all vulnerable to fraud.
Comments

AI is not the problem. The problem is how humans manipulate AI

XxTheAwokenOnexX

£50 for lunch in this day and age is very believable

georgesmith

The simple answer is that you cannot stop this. Anyone who thinks you can is delusional.

The cat is out of the bag already. The world has to live with it.

seriousmaran

Five to ten years from now you will not have a clue as to whether what you are watching is real or not. They say there is a problem with conspiracy theorists now, but soon nobody will know what the truth is. You will just have to guess and hope. People will go vaguely mad, I think.

Hereford

The scale of this power in the wrong hands can do a lot of wrong

AdamEuroS

How do I know if this report clip is a deepfake or not 😅

PrincShun

How do you stop cyber criminals from abusing AI when the cyber criminals work within corrupt governments?

bigbova

Solution:
What I think could help is if every ABC, BBC, RT clip (or any other media) carried a special semi-transparent code in the bottom corner of the screen, from which you could read the exact date down to the second, the channel/creator, etc. When you put that code into a search engine, you would be able to find an archive of the original whenever you suspect the clip you are looking at is a fake. Of course, this would do nothing against government-supported alteration of history, in which case the original archive would also be altered unless new additions were constantly copied by an independent pro-democratic organisation. Feel free to offer a better solution against deepfakes.

IonorRea
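A rough sketch of how the on-screen code proposed above could work, assuming the code is simply a truncated SHA-256 hash of the clip published to an independent archive; the `ARCHIVE` registry and the `register_clip`/`verify_clip` helpers are hypothetical illustrations, not an existing standard:

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical in-memory "archive"; a real one would be an independent,
# append-only public registry, as the comment suggests.
ARCHIVE = {}

def register_clip(video_bytes: bytes, channel: str) -> str:
    """Broadcaster side: derive a short provenance code for a clip and archive it."""
    timestamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    digest = hashlib.sha256(video_bytes).hexdigest()
    code = digest[:16]  # the short code shown in the corner of the screen
    ARCHIVE[code] = {"sha256": digest, "channel": channel, "timestamp": timestamp}
    return code

def verify_clip(video_bytes: bytes, code: str) -> bool:
    """Viewer side: check a downloaded clip against the archived original."""
    record = ARCHIVE.get(code)
    if record is None:
        return False  # unknown code: provenance cannot be confirmed
    return hashlib.sha256(video_bytes).hexdigest() == record["sha256"]

# A tampered copy no longer matches the archived hash.
original = b"...raw video bytes..."
code = register_clip(original, channel="BBC News")
print(verify_clip(original, code))                # True
print(verify_clip(original + b"tampered", code))  # False
```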

Developing a safe word with your family, almost like an in-person password, could be a good way to prevent this kind of voice replication scamming. It will be interesting to see how AI-produced and real content will be told apart in the future. It seems as though some sort of authentication procedure will need to be set up to prevent deception.

MW
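The safe-word idea above is essentially a shared-secret authentication procedure. A minimal sketch of how it could work in software, assuming a hypothetical HMAC challenge-response (the function names are illustrative only): the caller proves knowledge of the family secret without ever saying it aloud, so a voice clone that only mimics how a relative sounds cannot answer the challenge.

```python
import hmac
import hashlib
import secrets

FAMILY_SECRET = b"shared phrase agreed in person"  # never spoken on the call

def make_challenge() -> bytes:
    """Receiver side: generate a fresh random challenge for this call."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes = FAMILY_SECRET) -> str:
    """Caller side: answer the challenge using the shared secret."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes = FAMILY_SECRET) -> bool:
    """Receiver side: a correct response proves knowledge of the secret."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))      # True: genuine caller
print(verify(challenge, "guess-by-voice-clone"))  # False: impersonator
```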

Will the time ever come when man invents something for the benefit of mankind without thinking of the evil uses it could be put to and the personal gain he might obtain?

ssartre

How do we even know if this video isn't deepfaked?

yzie

Pandora is out of the box. No putting it back. It's already far too late to do anything about it.

OperationDx

This is heading towards the "I, Robot" movie. We are digging our own grave.

adelhmonteiro

I fear this is the beginning of the end. None of the big tech companies are willing to pull the switch on AI now, as it would let competitors dominate and take over. It's scary to think what we will be seeing in a few years' time. I worry for my children and what world they will be living in in the future.

btcharlee

How do you prevent Pegasus-style software and deepfake imagery?

JamesPhillips-pg

Why did tech companies ever create these AI tools? Surely they could have predicted this.

ilqoqcn

This tech is only going to get better smh.

Mattbriggs

Reminds me of the BBC drama "The Capture".

lee

I suppose it's the video and audio version of fake text in papers and books. Just as you now have to choose carefully where you source what you read, you will have to source diligently what you watch and listen to. I'm more worried about AI automating jobs than anything.

jimmy

Create an identity that can be checked when things are real, something to confer legitimacy, a seal.
I am quite disturbed that, to this day, nobody (the authorities) has created an internet identity for us. Why not?

SuzanaMantovaniCerqueira
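The "seal" described above already exists in cryptographic form as a digital signature. A minimal sketch, assuming Python's `cryptography` package and leaving aside the hard part, which is how an authority would bind public keys to real identities and distribute them:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The creator (or an identity authority) holds the private key;
# viewers only need the matching public key to check the seal.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Martin Lewis has NOT endorsed this investment product."
seal = private_key.sign(message)  # the "seal" attached to the content

def check_seal(content: bytes, seal: bytes) -> bool:
    """Viewer side: verify the content really came from the key holder."""
    try:
        public_key.verify(seal, content)
        return True
    except InvalidSignature:
        return False

print(check_seal(message, seal))              # True: genuine
print(check_seal(b"forged statement", seal))  # False: tampered or faked
```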