Reverse engineering generative models from a single deepfake image

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
54,277
I am not a fan of FB and deleted my account long ago, but this is surely a step in the right direction on the topic of deepfakes, which I think could certainly be weaponized in many ways.

Reverse engineering generative models from a single deepfake image


Deepfakes have become more believable in recent years. In some cases, humans can no longer easily tell some of them apart from genuine images. Although detecting deepfakes remains a compelling challenge, their increasing sophistication opens up more potential lines of inquiry, such as: What happens when deepfakes are produced not just for amusement and awe, but for malicious intent on a grand scale? Today, we — in partnership with Michigan State University (MSU) — are presenting a research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it. Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with.

Lots more to read in the article.
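
For a rough sense of what attributing a deepfake back to a generator could look like in code, here is a toy Python sketch. To be clear, this is not the FB/MSU method described in the article, just a common simplification: assume each generative model leaves a characteristic high-frequency residual ("fingerprint") in its outputs, average that residual over known samples, and attribute an unknown image to whichever known fingerprint correlates best. Every name and the data layout are hypothetical.

```python
# Toy illustration of fingerprint-based attribution (NOT the article's method).
# Assumption: each generator leaves a characteristic high-frequency residual
# that survives averaging, so an unknown image can be attributed by correlating
# its residual against the averaged fingerprints of known generators.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img: np.ndarray) -> np.ndarray:
    """High-pass residual of a grayscale image: image minus a blurred copy."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=2)

def fingerprint(samples: list[np.ndarray]) -> np.ndarray:
    """Average the residuals of many images from the SAME generator."""
    return np.mean([residual(s) for s in samples], axis=0)

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two residuals."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def attribute(unknown: np.ndarray, known: dict[str, np.ndarray]) -> str:
    """Return the name of the known generator whose fingerprint matches best."""
    r = residual(unknown)
    return max(known, key=lambda name: correlation(r, known[name]))

# Hypothetical usage:
# known = {"gan_a": fingerprint(gan_a_samples), "gan_b": fingerprint(gan_b_samples)}
# print(attribute(suspect_image, known))
```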
 

Zarathustra[H]

Extremely [H]
Joined
Oct 29, 2000
Messages
36,109
Ever since deep fakes started being discussed I have thought that it would be a perfect application for machine learning.

You could have it scan a very large sample of real and fake content to train it, then see if it can tell you with confidence which is which.

A lot of the AI/machine learning work being done is dumb and/or borderline malicious, but these people are doing important work!
 

Nobu

[H]F Junkie
Joined
Jun 7, 2007
Messages
8,977
Someone had to do this eventually. The potential for abuse was just too big (and no doubt already going on in the wild).
 

sharknice

2[H]4U
Joined
Nov 12, 2012
Messages
3,209
From the article: "The results showed that our approach performs substantially better than the random ground-truth baseline."

They're being pretty vague. I wonder what percentage they could actually identify.

Also, they require an existing image generated by the same algorithm to be able to do that. If it's a brand-new deepfake generated by an algorithm they haven't seen yet, it won't work. Although I believe this could also be trained to identify deepfaking in general rather than a specific algorithm.

Furthermore, it would be possible for someone to use the detection models to train their deepfakes to be harder to detect, making them even better.

I think eventually it will just be impossible to tell the difference, even for a computer.
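
On the arms-race point, that kind of detector-aware training is easy to sketch. The hypothetical PyTorch snippet below adds a "fool the detector" term to a generator's loss, assuming a frozen, pretrained detector that outputs the probability an image is fake; `generator`, `detector`, `z`, and the optimizer are all placeholders, nothing from the article.

```python
# Hypothetical sketch: fold a frozen deepfake detector into a generator's
# training so its fakes score as "real" to that detector.
import torch
import torch.nn.functional as F

def evasion_step(generator, detector, z, opt):
    """One generator update that also tries to fool a frozen detector."""
    for p in detector.parameters():          # keep the detector frozen
        p.requires_grad_(False)
    fake = generator(z)                      # synthesize a batch of images
    p_fake = detector(fake)                  # detector's probability each image is fake
    # Push the detector's output toward 0 ("real") for generated images.
    evasion_loss = F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake))
    loss = evasion_loss                      # + the generator's usual losses
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```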
 

Nobu

[H]F Junkie
Joined
Jun 7, 2007
Messages
8,977
From the article: "The results showed that our approach performs substantially better than the random ground-truth baseline."

They're being pretty vague. I wonder what percentage they could actually identify.

Also, they require an existing image generated by the same algorithm to be able to do that. If it's a brand-new deepfake generated by an algorithm they haven't seen yet, it won't work. Although I believe this could also be trained to identify deepfaking in general rather than a specific algorithm.

Furthermore, it would be possible for someone to use the detection models to train their deepfakes to be harder to detect, making them even better.

I think eventually it will just be impossible to tell the difference, even for a computer.
You could train it by providing it with various deepfakes, NOT providing the originals, and explicitly telling it they are fake, then providing some images that are not fake and telling it they are not. After a while it should be able to pick out the artifacts that give away the fakes fairly reliably (as long as those artifacts occur in a good percentage of fakes and are absent from most of the unaltered images).
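
That is essentially standard supervised training. A minimal PyTorch sketch of the idea, assuming two folders of already-labeled images and no originals at all; the folder layout, image size, and tiny CNN are illustrative, not anything from the article.

```python
# Minimal sketch: train a fake-vs-real classifier from labeled images alone.
# Assumed (hypothetical) folder layout:
#   data/fake/*.png   <- known deepfakes
#   data/real/*.png   <- unaltered images
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data", transform=tfm)   # labels come from folder names
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = nn.Sequential(                      # tiny CNN, just to show the shape of the thing
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:           # labels: 0 = fake, 1 = real (alphabetical order)
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```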
 

Axman

[H]F Junkie
Joined
Jul 13, 2005
Messages
15,063
This is a good thing, even though it will lead to progress going both ways. But we'll still have to deal with the PEBKAC issue.

Deepfakes in movies: "Part of his body passed through a solid object, these special effects suck!"

Deepfakes in real life: "Part of his body passed through a solid object? You and your wacky theories!"
 
Joined
Jun 10, 2004
Messages
3,954
I can see a world coming soon where one can hire mercenaries to create deepfakes to help murderers get off, or to frame people for felonies, as the tech gets more sophisticated. Sci-fi will continue to become real life at warp speed.
 

staknhalo

Supreme [H]ardness
Joined
Jun 11, 2007
Messages
4,350
Ever since deep fakes started being discussed I have thought that it would be a perfect application for machine learning.

You could have it scan a very large sample of real and fake content to train it, then see if it can tell you with confidence which is which.

A lot of the AI/machine learning work being done is dumb and/or borderline malicious, but these people are doing important work!

It's gonna do wonders for facial recognition when people are wearing prosthetics/masks/glasses, too [insert tinfoil emoji here]
 

travm

[H]ard|Gawd
Joined
Feb 26, 2016
Messages
1,929
I can see a world coming soon where one can hire mercenaries to create deepfakes to help murderers get off, or to frame people for felonies, as the tech gets more sophisticated. Sci-fi will continue to become real life at warp speed.
That's why I'm going to invent a secure camera that takes secure verifiable pictures and video with an auditable trail. I'll make millions selling to security companies, until China steals my designs. Patent pending.
 

Zarathustra[H]

Extremely [H]
Joined
Oct 29, 2000
Messages
36,109
That's why I'm going to invent a secure camera that takes secure verifiable pictures and video with an auditable trail. I'll make millions selling to security companies, until China steals my designs. Patent pending.

I'm sure there is some way to use blockchain for this. It will aid in raising funds :p
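
Joking aside, the auditable-trail part doesn't strictly need a blockchain: a key provisioned in the camera plus a hash chain over captures gets you most of it. A toy Python sketch, with HMAC standing in for a real hardware-backed signature; every name and key below is made up.

```python
# Toy sketch of an auditable capture trail: each record embeds the previous
# record's hash and is MACed with a device key. HMAC stands in for a proper
# hardware-backed signature scheme; the key below is purely illustrative.
import hashlib, hmac, json, time

DEVICE_KEY = b"key-provisioned-at-manufacture"   # hypothetical

def record_capture(image_bytes: bytes, prev_hash: str) -> dict:
    entry = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev_hash": prev_hash,                  # chains this capture to the previous one
        "timestamp": time.time(),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["mac"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    entry["hash"] = hashlib.sha256(payload).hexdigest()   # next capture chains to this
    return entry

def verify_chain(entries: list[dict]) -> bool:
    prev = "genesis"
    for e in entries:
        body = {k: e[k] for k in ("image_sha256", "prev_hash", "timestamp")}
        payload = json.dumps(body, sort_keys=True).encode()
        good_mac = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        if e["prev_hash"] != prev or not hmac.compare_digest(e["mac"], good_mac):
            return False
        prev = hashlib.sha256(payload).hexdigest()
    return True

# Hypothetical usage:
# chain = [record_capture(frame1, "genesis")]
# chain.append(record_capture(frame2, chain[-1]["hash"]))
# assert verify_chain(chain)
```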
 