Unless you live under a rock, you’ve heard about deepfakes. Deepfakes are videos that blend reality with fiction by training a generative neural network to replace a person’s face with someone else’s. These videos are making news, and not always in a good way. They can create the impression that a person — usually a celebrity or political figure — did something they never did. And sometimes, they are made and distributed with malicious intent.
Like it or not, deepfakes have become ubiquitous. You will continue to see them as software innovation makes them easier to make — and harder to spot.
As deepfakes become more prolific, you might wonder if there’s anything to worry about. Should you be concerned about someone using your images to create a video that depicts you doing something malicious? Should you take steps to protect yourself from this potential situation?
In short, yes, you should be worried.
But before you call the FBI, keep reading to learn more about the risk of deepfakes and how to lower your personal exposure.
The rise of deepfakes — where they started, where they’re going
The concept of manipulating images isn’t new. Nineteenth-century photographers could alter photos while developing and printing them. After motion pictures were introduced in the late 1800s, special effects — manipulating the images seen on screen — were often used to control what the audience saw and help tell the story.
The term “deepfake” took this concept much further. In 2017, a Reddit user named “deepfakes” made and shared pornographic videos with a famous person’s face placed onto a performer’s body. The first victims were Hollywood actresses. The idea spread quickly, and the user created a dedicated subreddit for these videos. Reddit subsequently banned that subreddit and all “involuntary pornography” content.
This is how deepfakes and their villainous reputation were born, popularized, and scrutinized. But this unethical history does not mean that deepfakes can’t be used in positive ways.
Much as filmmakers have used computer-generated imagery (CGI) since the 1970s to alter images, they now use deepfakes to de-age actors playing younger versions of themselves and to create “new” footage of real historical figures.
Deepfakes are also used to:
- create parody videos
- produce photorealistic avatars in video games
- allow players to place their faces on their avatar
- bring historical figures back to life
Most of us have seen the results of face-swapping technology in some form or other, whether in movies or funny videos. It’s all fun and games — until someone’s mom stops talking to them.
Deepfakes have the potential to cause severe damage. If a deepfake video shows you doing something illegal or even just socially unacceptable, you could lose your job, get arrested, or get “canceled” — even if everyone knows it’s fake.
Deepfakes can alter the process and results of politics by inflicting reputational damage, undermining trust in elections, and fueling information warfare.
Even your money is at risk. A deepfake video could cause the stock market to plunge, much like when a fake tweet cost a company billions of dollars. On a more personal level, a deepfake image might even fool your online bank’s ID verification, letting a stranger into your bank account and bypassing your password and other security measures.
As face-swapping technology and AI image generators improve, spotting deepfakes gets harder. Nearly anyone can make a realistic deepfake today.
These scenarios are concerning, but before you get too worked up, let’s examine whether regular people like us need to worry about deepfakes.
How to protect yourself from deepfakes
How much risk is there that you’ll someday star in an unauthorized deepfake video? If you’re famous, the possibility is very real. A quick Google search of “deepfake scandals” turns up many celebrities, politicians, and high-profile internet personalities.
But if you’re a regular old Joe like most of us, it’s much less likely someone will be interested in creating a deepfake of your likeness. If the idea of being in a deepfake still concerns you, there are ways to protect yourself and your reputation.
The easiest way to prevent deepfakes is to monitor your media.
- Know where your images are, whether on social media, video-sharing platforms, or other places.
- Control your online images. Make your profiles private, double-check your online settings on all your platforms, and don’t post videos or pictures others can access. In fact, the best thing to do is not post any images of yourself at all. Limiting the images available prevents deepfakers from getting enough data about you to make a convincing video. And without quality images, any deepfakes they create will be low quality and likely won’t fool anyone.
- Monitor your family and friends’ posts, too. They may post something and tag you. If they do, remove the tags immediately. Ask them not to post images of you and to remove any they’ve already posted.
Here’s the rub: Anyone who posts images of themselves online is at risk. If you don’t want to take that risk, don’t post pictures of yourself or let anyone else post them.
Now, what about detecting deepfakes? Unfortunately, at the moment, detecting deepfakes is something of a cat-and-mouse game. Tomorrow’s innovations may elude today’s detectors. For now, you’ll have to rely on those old-fashioned eyeballs to detect and flag suspected deepfakes.
What to do if you’re exposed
If you become the victim of an unauthorized deepfake, the first step is to contact the platform where the video was posted and follow their instructions for removal. Pornhub, Facebook, Instagram, TikTok, Twitter, and Google each have their own procedures. You may need to submit a photo of yourself, a copy of your ID, or even a video of yourself holding them, so be prepared to prove you are who you say you are (ironic, no?).
What about professional help? Public relations companies can actively monitor and manage your online reputation. This measure isn’t cheap, but if you need serious assistance, PR firms specialize in this kind of reputation management and will help you detect and combat the fallout from a deepfake.
So now what?
Why do I persist in making deepfakes if the danger is so real? I believe awareness is the key to disarming the dangers of deepfakes and other AI technologies. With increased public awareness of deepfake videos and how easy they are to make, people will scrutinize them more thoroughly. They are more likely to question out-of-character video content before blindly believing what they see. This awareness simply can’t be created any other way.
Here at DeepMake, we champion ethical, educational, and positive uses of deepfake technology. We do not condone unethical actions such as non-consensual pornographic deepfakes. We’ve created an Ethical Manifesto that details our zero tolerance for anyone using DeepMake technology for dishonest purposes, inappropriate content, or without the consent of those affected.
If you’re interested in learning more about deepfakes, how they’re made, and how to protect yourself today and in the future, check out the book Exploring Deepfakes, by Bryan Lyon and Matt Tora, available in bookstores everywhere.
We realize that there are unethical deepfakers out there, and the threat of losing control of your image is real. We believe the best way to shield yourself from deepfakes is to take control of your online identity. Unethical deepfakers can’t create a deepfake if they don’t have the data to train against. Take the time to review your online presence, your privacy settings, and even your interest in participating in social media online to give yourself the greatest protection.