Facebook and Twitter have released their policies on deepfakes


People speaking out about deepfakes have helped create new laws and technologies to protect others from malicious fake content. For one woman, control over her online image was taken away before she was even an adult. When Noelle Martin was 18, she googled herself. After some digging, she found something unexpected: her face superimposed onto pornographic images she had never posed for. A couple of years later, she found pornographic videos featuring her face, videos she had never appeared in. She had become the victim of several deepfakes. After processing what had been done to her, she began advocating for laws to ban this type of abuse and spreading awareness that it could happen to anyone.
A deepfake is any piece of media altered with AI technology to portray someone doing or saying something they never did. Hollywood has used deepfakes in films for years, and as the technology has improved, the editing technique has spread across the internet.
Much of the recent attention on the topic has centered on the 2020 election. With many fearing that fake media would interfere, social media platforms released new policies for dealing with fake videos, and a few states passed legislation against certain deepfakes. Deeptrace Labs, a company that researches and detects deepfakes, reported more than 14,000 deepfakes in 2019, an 84 percent increase from 2018. Yet 96 percent of those deepfakes were nonconsensual deepfake pornography.
Martin tried to contact the websites posting the deepfakes of her. They either didn’t respond or wouldn’t take the content down. She then had a decision to make: she could go about her life, knowing those images existed and hoping they would not surface, or she could become an advocate for the issue and have her name forever associated with the term “deepfakes.”
“When everybody around you knows your story, that is now attached to you for the rest of your life, publicly,” Martin said. “So navigating that association, even if that might be good or bad, I think it has probably been the hardest transition.”
Martin ultimately decided to become an advocate. She lives in Perth, Australia, and is on her way to becoming a lawyer. She spent her early 20s as a spokesperson for laws criminalizing fake pornographic images, including giving a TED talk in 2017. A major point in her speeches is that, in addressing this issue, the internet should remain a place where people can still express themselves.
“I did not fight publicly or fight for law reform so that we are less free and more fearful to engage in this social media world and engage in the online world,” Martin said. “I fought for the kind of freedom to express ourselves and still be safe online.”
Since she started advocating on the issue, Martin has been harassed online. The harassment intensifies whenever she speaks out, but Martin has remained vocal and continues to maintain a presence on social media despite the scrutiny directed at her.
“That’s what made me really mad,” Martin said. “I think one of the comments that would really upset me would be, ‘I don’t know if it really affected you if you still keep posting photos of yourself dressed in some kind of way.’ Absolutely it did. But that is separate from my agency over my own body and my sexuality.”
Sulafa Zidani, a doctoral candidate and digital media expert at the University of Southern California, said equal rights groups have been fighting iterations of this problem for centuries.
“When we talk about sexism, this isn’t something that appeared with the internet,” Zidani said. “So if people are creating deepfake videos to take advantage of women or humiliate them or harass them that’s kind of like the child of sexism that existed way before.”
While researching meme culture, Zidani has found that the internet has allowed for more representation and given marginalized groups places to share their voices. But the internet has also allowed hate speech to flourish.
“The thing about misogynistic culture is that it’s dominant and it’s empowered. So, it’s not scared to voice itself,” Zidani said.
Bringing attention to the fact that this behavior exists can also add to the problem. People share misinformation or defamatory content in hopes of explaining why a post is problematic, but that action sends a very different message to online algorithms.
“If I’m sharing a post even if I hate it, this tells the algorithm that this post is getting a lot of attention and attention is what rules online,” Zidani said. “So we should be aware and careful how we use our own attention and how we use the attention of others.”
People have found ways to discuss misinformation without triggering the algorithm, such as posting screenshots of a post so viewers will not click on the original link. But Zidani suggests simply ignoring it.
“Sometimes the most powerful thing is to keep scrolling,” Zidani said.
In 2019, legislation passed at both the federal and state levels to combat deepfakes. Three states have passed laws against specific manipulated videos, falling into two categories: California and Virginia criminalize manipulated pornographic or intimate images, and California and Texas prohibit manipulated video of political candidates.
Hollywood has used deepfake technology for years, whether to complete a role after an actor’s death or for special effects like the de-aging process in the 2019 film “The Irishman.” In Hollywood, however, individuals or their families give the studios permission.
“The studios are going to be well-advised not to undertake deepfake activities that are not blessed either by the individuals if they are still alive or by their estates or representatives if they are deceased,” said attorney Douglas Mirell, who represents the Screen Actors Guild-American Federation of Television and Radio Artists, or SAG-AFTRA.
Mirell supported California Assembly Bill 602, which allows people depicted in deepfakes to sue if a video of them includes sexually explicit material. The bill passed after several female celebrities were depicted in pornographic deepfakes.
“There are a lot of people that can produce content other than the studios,” Mirell said. “They don’t have the same sorts of real-world constraints.”
Creators can post videos anonymously, and anyone in the world can post content. Even if the U.S. passes laws criminalizing fake videos, people in other countries could still post them.
Non-celebrities like Noelle Martin find it difficult to take any action without the resources of a public figure. While laws against deepfakes have passed in Australia, Martin does not believe the laws alone will help other victims.
“If someone today were experiencing the same thing I went through in Australia, they would not have any prospect of justice even though we have the laws in Australia,” Martin said. “They can hide themselves, and they can hide their trace well enough to avoid being held accountable.”
Several deepfake YouTube channels create manipulated videos that do not cross the line into sexually explicit territory. They experiment with media by swapping celebrities into movies and television shows. Most of these creators label the video as a deepfake, either with a watermark on the video or in its title. According to Mirell, most deepfakes labeled as such by the creator are protected by the First Amendment because they do not try to deceive viewers into believing they are real.
With people sharing these videos online, social media companies have responded in different ways: some have taken stances against manipulated videos, while others have used AI technology to their advantage. Snapchat and TikTok have embraced AI in filters and features. Facebook and Twitter, by contrast, have both released official blog posts outlining their deepfake policies, and YouTube said it would ban manipulated videos surrounding the 2020 election.
These policies are not as straightforward as deleting every manipulated video. Facebook and Twitter may instead place alerts next to videos to let viewers know the material was manipulated. How a company handles a video also depends on what kind of fake it is. Deepfakes are specifically fake videos made with AI technology, and several technology companies and universities have done extensive research building programs that can detect it.
“People who create deepfakes will keep improving the quality of the deepfakes, and we have to keep improving our detection methods as well,” said Wael Abd-Almageed, a research professor at the University of Southern California.
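Detection tools vary, but many published approaches work frame by frame: a neural network is trained on labeled real and fake faces, then scores new frames. The following Python sketch, using the PyTorch and torchvision libraries, illustrates that idea only; it is not any company's actual detector, and the weights file, frame file names and 0.5 threshold are hypothetical stand-ins.

```python
# Minimal, illustrative frame-level deepfake scorer.
# NOT a real detector: the fine-tuned checkpoint and the 0.5
# threshold below are hypothetical stand-ins for a trained system.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Standard ImageNet-style preprocessing applied to each video frame.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

# A small CNN with a two-class head: real vs. fake.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
# In practice the network would be fine-tuned on labeled real/fake
# faces; "detector_weights.pt" is a hypothetical checkpoint.
# model.load_state_dict(torch.load("detector_weights.pt"))
model.eval()

def fake_probability(frame_path: str) -> float:
    """Return the model's probability that a single frame is fake."""
    image = Image.open(frame_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
        probs = torch.softmax(logits, dim=1)
    return probs[0, 1].item()  # index 1 = "fake" class

# A video is flagged if its frames look fake on average.
frames = ["frame_000.jpg", "frame_001.jpg", "frame_002.jpg"]
score = sum(fake_probability(f) for f in frames) / len(frames)
print("fake" if score > 0.5 else "real", f"(score={score:.2f})")
```

The arms race Abd-Almageed describes plays out in exactly this loop: as generators learn to produce frames the classifier scores as real, the classifier must be retrained on the new fakes.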
Cheapfakes, or shallowfakes, are videos that are manipulated without AI. These videos would not be caught by such detection programs. If a cheapfake is called into question, social media platforms have fact-checkers review the video to judge whether manipulation occurred.
Since more people have access to better editing tools, cheapfakes can be just as effective as deepfakes. A video of Nancy Pelosi went viral when it appeared that she was drunk or impaired at a speaking engagement. Instead of using AI, the creator had simply slowed the footage and adjusted the audio.
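That kind of edit requires no machine learning at all. Purely to illustrate the mechanics (the file names here are hypothetical, and this is not how the Pelosi clip was necessarily produced), slowing audio can be as simple as telling a player to read the same samples at a lower rate, which Python's built-in wave module can do in a few lines:

```python
# Illustrative only: slow a WAV file to 75% speed by lowering its
# declared sample rate, so players read the same samples more slowly.
# (This also deepens the pitch; polished cheapfakes often correct for that.)
import wave

SLOWDOWN = 0.75  # play at 75% of the original speed

# "speech.wav" and "speech_slow.wav" are hypothetical file names.
with wave.open("speech.wav", "rb") as src:
    params = src.getparams()
    audio = src.readframes(params.nframes)

with wave.open("speech_slow.wav", "wb") as dst:
    dst.setparams(params)
    # Same samples, lower playback rate -> slower, slurred-sounding speech.
    dst.setframerate(int(params.framerate * SLOWDOWN))
    dst.writeframes(audio)
```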
“I grew up in an era where seeing is believing,” Abd-Almageed said. “We need as a society to sort of evolve around the idea that seeing is not believing anymore.”