Is DeepFake Really All That? - Computerphile

How much of a problem is DeepFake, the ability to swap people's faces around? Dr Mike Pound decided to try it with colleague Dr Steve Bagley.


This video was filmed and edited by Sean Riley.


Comments

The solution is to require all videos featuring a politician to be recorded while they dance and there are disco lights everywhere.

bagandtag

I was just thinking: You probably filmed this in the garden because of covid, but it also just gives such a nice atmosphere compared to the videos filmed in windowless rooms. Maybe you could keep doing that even if it's no longer necessary?

flurki

It's interesting that emergency broadcasts from world leaders, surely among the videos you most badly want never to be faked, are the easiest type to fake. One face looking straight towards a camera with minimal emotion.

EDoyl

We should deep fake that dirt off of the paper facing camera lens!

doougle

It's 100 degrees and Mike is still flexing on us with the sweater. Mad lad.

SlopedOtter

If Dr. Mike had his own channel, I would watch every minute of it

notoriouskiller

If arm waving makes the process more difficult, I reckon Italians are safe from deepfakes for an extra couple of years.

theophrastusbombastus

I think Mike is selling himself short about whether Tom Cruise's face on his body would look realistic.

CathyInBlue

9:30, I like how he briefly considered mentioning the most common use case and then clearly decided against it.

genentropy

One of the problems of using cryptographic signing for image verification is 'what are you signing?'
You're taking the input stream from the camera, and signing that, but how do you know it's coming from the camera? Because the library calling the 'sign this please' function told you it was. A software vulnerability could let someone call the method with arbitrary content, and you'd have no way to know.

Even if you move the cryptography into hardware, there is still going to be a place between the camera and writing the signed data to the network or storage device which could be attacked. (How are you securing/validating the key material? How are you authenticating the date/time info? Was it the front camera, the back camera, or a USB camera that was used? I have a USB display adapter which appears as a USB camera on another computer... whatever program I want to display on it will show up on the USB camera.)
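To make the point concrete, here is a toy Python sketch of the provenance gap. It uses HMAC with a shared key as a stand-in for a device's asymmetric signing key, and the key and frame bytes are invented for illustration: the signature is cryptographically valid either way, because the signer authenticates whatever bytes it is handed, not where they came from.

```python
import hashlib
import hmac

# Hypothetical device key; a real design would keep this in a secure element.
DEVICE_KEY = b"secret-key-in-secure-element"

def sign_frame(frame_bytes: bytes) -> str:
    """Sign whatever bytes the caller hands us. The signer has no way
    to verify that these bytes actually came from the camera sensor."""
    return hmac.new(DEVICE_KEY, frame_bytes, hashlib.sha256).hexdigest()

real_frame = b"pixel data straight from the sensor"
fake_frame = b"deepfaked pixel data injected by malware"

# Both produce perfectly valid signatures: the crypto is sound,
# but it authenticates the input, not its provenance.
sig_real = sign_frame(real_frame)
sig_fake = sign_frame(fake_frame)
```

Both signatures verify identically, which is exactly the problem: validity of the signature says nothing about where the frame originated.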

zenithparsec

The solution is something that already exists in PDF signing: the timestamping server. You have to get all your frames timestamped within x seconds of recording.
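A minimal sketch of that idea in Python, with the trusted timestamping authority modelled as a local function holding its own key (a real TSA signs with an X.509 certificate under RFC 3161; the HMAC key and token format here are invented simplifications). The server binds a frame's digest to the current time and signs both together:

```python
import hashlib
import hmac
import time

# Hypothetical key held only by the trusted timestamping server.
TSA_KEY = b"timestamp-authority-key"

def tsa_timestamp(digest: bytes) -> dict:
    """The server binds a digest to the current time and signs both."""
    ts = int(time.time())
    payload = digest + ts.to_bytes(8, "big")
    return {
        "digest": digest,
        "time": ts,
        "sig": hmac.new(TSA_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify(token: dict) -> bool:
    """Anyone trusting the TSA can check that digest and time were not altered."""
    payload = token["digest"] + token["time"].to_bytes(8, "big")
    expected = hmac.new(TSA_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

frame = b"raw frame bytes"
token = tsa_timestamp(hashlib.sha256(frame).digest())
```

Changing either the digest or the claimed time invalidates the signature, so a faker cannot retroactively timestamp a fabricated frame as having existed at recording time.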

henryzhang

"Sort that out in post for me". I was really hoping it would change over to animation and just scribble the legs out. 😀

marklonergan

I'm glad that people are worried about this. I wouldn't put money on it not becoming a major problem in a few years' time.
... but AI has a tendency to plateau. It has followed the same pattern over and over, pretty much since its inception. From the Turing Test, to voice recognition, to driving cars: the tools of AI are brought to bear, there is a burst of activity as they rapidly ramp up to accomplish things that were previously considered impossible, but almost as rapidly they run into walls all over the place, and it all grinds down, leaving problems frustratingly far from the finish line for literally decades (and counting).
-
In the case of deep learning AIs*, these 'walls' seem inherent to the black-box nature of the thing. You can tweak the dials, set it going, and get some impressive results, but the closer you want to get to perfection, the more you have to fine-tune the whole thing. That becomes increasingly intractable, because ultimately you don't know what the AI is doing under the hood. (People train AIs with AIs and use other such tricks to try to get around these issues.)
-
You could argue that you just need to increase the volume of data it's learning from, but that also appears to get harder and harder. Compilations of hundreds of images have become compilations of thousands, tens of thousands, hundreds of thousands, and yes, the AI improves, but by smaller and smaller increments. You ultimately fall into exactly the same trap: to get the AI to work better with these image sets, you have to fine-tune it, and how do you fine-tune something that is fundamentally opaque? With great difficulty...
-
I'll reiterate: I wouldn't put money on this. I've studied AI, but it certainly wasn't the central aspect of my degree, and I could be proven dramatically and horrifically wrong tomorrow. Hope for the best, prepare for the worst, and all that.
But I also won't lose sleep over the coming fake-video frenzy, especially as other means of manipulating the populace have proven incredibly effective, are considerably simpler (assuming you have the money and influence to steer them), are already doing inestimable amounts of damage to us, and consequently are presently far more terrifying to me than the possibility of truly convincing deepfakes at some point in the future.
-
*I say in the case of deep learning, but it's a heuristic problem that extends to all the forms of AI I'm aware of; there is just nuance in exactly how.

xtieburn

Also, deepvoice is so, so much easier, and already 100% believable. So you can fake a recording, and quite a few people can do it.

masansr

I just know there will be a perfect deepfake of Steve doing this video in a year or two, just to prove his "5-10 years" timeline wrong.

masansr

Just watched your 5-year-old steganography video; it was amazing. I am happy that you are still active and spreading knowledge. Thank you, sir.

Markwhatney

6:25 "return a nice picture of me [...] nice is relative" aaaand zoom on the face. well done!

stannone

Maybe CCTV cameras could, when they store the footage after compression and the like, use a private/public key pair to produce a signature over each frame using SHA-256 (if it's fast enough). Then anyone with the public key could check whether the file is genuine. Basically this would make each frame of the video verifiable. If that's too slow, you could probably do the same thing in segments of video a minute long or so.
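A related stdlib-only sketch of per-frame integrity: instead of signing every frame independently, chain the SHA-256 digests so each link covers the frame plus the previous digest. Then only the final digest needs signing, and editing any frame invalidates every later link. The frame bytes below are invented placeholders, and a real system would still need to protect the signing key as discussed above.

```python
import hashlib

def chain_frames(frames):
    """Hash-chain the frames: each link covers the frame plus the
    previous digest, so editing any frame breaks all later links."""
    digest = b"\x00" * 32  # fixed genesis value
    links = []
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
        links.append(digest)
    return links

frames = [b"frame-%d" % i for i in range(5)]
original = chain_frames(frames)

# Swap one frame for a deepfaked one and re-chain.
tampered_frames = list(frames)
tampered_frames[2] = b"deepfaked frame"
tampered = chain_frames(tampered_frames)
```

The chain localises tampering: links before the edited frame still match, every link from the edit onwards diverges, so a verifier can see both that and roughly where the video was altered.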

boggless

One way you could detect face swaps is to search for the original video on the internet while ignoring the face region, since everything else in the video will still be the same.

bilboswaggings

I'm not ready for the phishing companies to start making deepfakes of my parents in distress so I'll send them Visa gift cards.

DanielSavageOnGooglePlus