Deepfake Technology

Introduction

Deepfake technology uses machine learning to generate synthetic images, videos, and audio. It can also be used to manipulate existing content with another person's likeness, which has made it a source of controversy when used maliciously. The most effective ways of combating the negative effects of deepfakes are education and prevention, alongside legal remedies. Major companies such as Facebook and Google are providing datasets and benchmarks to enable researchers and universities to develop new technology that better detects tampered content.[1] Despite negative implications such as encouraging the spread of misinformation, the technology also has potential benefits in artistic industries and in the medical field.[2][3]

History of Deepfake Technology

Origins: 1997

Deepfake technology first emerged as early as 1997, when the academic paper "Video Rewrite: Driving Visual Speech with Audio" was published.[4] The research was conducted at Interval Research Corporation by Christoph Bregler, Michele Covell, and Malcolm Slaney.[4] This paper was the first to introduce the idea behind deepfake technology, although at the time the authors referred to the concept as Video Rewrite. The technology used existing footage to automatically create a video of a person speaking words taken from a different audio track, based on reference footage of the same subject.[4] This early version of deepfake technology used computer vision techniques to track points on the speaker's mouth and then used morphing techniques to combine these mouth movements.[4] The researchers created the technology with the intent that it be used in movie dubbing, as creating a deepfake would be more cost-efficient than hiring local actors to reshoot scenes in different languages.[4] It was the first fully automated facial animation system, and it relied on creating connections between the sounds produced and the shape of the subject's face. The researchers demonstrated their creation by reanimating footage of John F. Kennedy saying, "I never met Forrest Gump"; however, at the time the quality of the video made it fairly easy to recognize the footage as not genuine.[4]

New Surge: 2017

Examples of Deepfakes Created with FakeApp [5]

There were relatively few notable developments in this field for the next two decades, but deepfakes became a topic of interest again in July 2017, when the University of Washington released a video of a computer-generated Barack Obama, produced with newly developed algorithms, depicting him speaking on topics such as fatherhood and terrorism.[6] Barack Obama was chosen for the research "because machine learning techniques need available video of the person to learn from, and there were hours of presidential videos in the public domain".[7] The researchers hoped the technology would be used to improve video conferencing, as streaming audio instead of video takes up less bandwidth, and could also be used for futuristic applications in artistic fields.[7] Furthermore, the university planned to reverse their algorithm to potentially create a method of detecting deepfakes.[7] Later that year, in November 2017, a major event brought deepfakes to the attention of the public when a Reddit user by the name "deepfakes" posted pornographic videos with the faces of celebrities swapped onto the actors in the videos, causing controversy.[6] Celebrities targeted included Scarlett Johansson and Emma Watson, and the widely viewed videos made the Reddit user "deepfakes" a pivotal point for future uses of this technology.[6]

Deepfakes in Politics: 2018

Following these events, manipulated videos of politicians, celebrities, and other public figures have gone viral, raising concern over the regulation and detection of deepfakes. Viral examples include a July 2018 video of Alexandria Ocasio-Cortez edited to appear as though she was unable to answer basic questions during an interview. Many viewers did not realize the content was fake, and the video called her political abilities into question.[6] In January 2018, FakeApp was released for download, reportedly reaching 120,000 downloads within its first two months of availability.[8] This easily accessible technology allowed average users to manipulate content and create deepfakes themselves, marking a significant rise in both the popularity and the number of deepfakes circulating the internet, often targeting politicians specifically.[6]

Regulation: 2019

The next notable developments emerged through 2018 and 2019, as online platforms began banning deepfakes after it was reported that the technology was being used to create and circulate revenge porn.[9] This caused further controversy, given that the videos were not actual recordings of the people depicted. Sites such as Discord and Pornhub explicitly banned the use of deepfakes, and Reddit soon followed suit in February 2018.[9] In 2019, the US House Intelligence Committee began hearing official testimony on deepfakes and discussing the national security threats they posed. Other countries have also begun to address the threat of deepfakes in an official capacity.[9]

U.S. Congress Deepfake Hearing[10]

Today

In 2020, deepfakes have caused significant reputational damage to public figures, but have also been used in industries such as film to enhance footage. Deepfakes are widely recognized as controversial, and various countries are working towards regulating the technology. Although many platforms have banned the circulation of deepfakes on their websites, other sites such as Facebook have only recently changed their policies. Facebook stated on January 7, 2020 that it would remove videos altered by artificial intelligence in ways meant to mislead viewers; however, critics pointed out that the new guidelines do not apply to all doctored videos, only those edited or synthesized in a way that an average user could not easily spot.[11] Facebook will continue to allow manipulated videos that are satirical in nature or edited without AI, which in essence allows videos such as the infamous clip of Nancy Pelosi appearing intoxicated to continue being spread. This "policy does not get to the core issue of how their platform is being used to spread disinformation… but rather how professionally that disinformation is created."[12]


How Does Deepfake Technology Work

Methods for Visual Deepfakes

As the term implies, "deepfake" is a combination of the words "deep learning" and "fake". Creating a deepfake uses artificial intelligence to learn the similarities between images until it can generate its own.[13] The following section explores two primary methods of creating deepfake images and videos.

Variational Autoencoder Method

With the variational autoencoder method, thousands of face shots of a subject and/or target are used to generate synthetic media. Deepfake creators can source these photos from the internet or take snippets from videos. Because the encoder needs to see the face from a variety of angles and lighting conditions, it is best to use a diverse set of sources for images of the subject and target. As a drawback, this approach requires a considerable amount of time and storage space due to the large amounts of data that autoencoders process and later generate. For this reason, users who choose this method may be limited to low-quality deepfake outputs unless a device with greater storage space and processing power is used.[14] Two autoencoders share a single encoder, which learns the similarities between the subject and the target and reduces each person's features to a latent space. The latent space represents the key features of a person's face and body posture; for example, the mouth, head position, and eyes are important features to capture in this state. Once training is complete, frames of the target video are passed through the shared encoder and reconstructed by the decoder that was trained on the subject's likeness, thus superimposing the likeness of the subject on the underlying facial features and posture of the target video.[15]

Diagram of generating a deepfake[14]
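
The sketch below, written in PyTorch, illustrates the shared-encoder and per-identity-decoder design described above (a plain rather than variational autoencoder, for brevity). The layer sizes, the 64x64 face crops, and the random input frame are illustrative assumptions, not a production configuration.

  import torch
  import torch.nn as nn

  class Encoder(nn.Module):
      # Shared encoder: compresses a 64x64 RGB face crop into a latent vector.
      def __init__(self, latent_dim=256):
          super().__init__()
          self.net = nn.Sequential(
              nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
              nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
              nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
              nn.Flatten(),
              nn.Linear(128 * 8 * 8, latent_dim),
          )

      def forward(self, x):
          return self.net(x)

  class Decoder(nn.Module):
      # One decoder per identity: rebuilds a face from the shared latent space.
      def __init__(self, latent_dim=256):
          super().__init__()
          self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
          self.net = nn.Sequential(
              nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
              nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
              nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
          )

      def forward(self, z):
          return self.net(self.fc(z).view(-1, 128, 8, 8))

  encoder = Encoder()
  decoder_subject = Decoder()  # would be trained to reconstruct the subject's faces
  decoder_target = Decoder()   # would be trained to reconstruct the target's faces

  # Face swap at inference time: encode a frame of the target video, then decode
  # it with the subject's decoder so the subject's likeness is superimposed onto
  # the target's pose and expression.
  target_frame = torch.rand(1, 3, 64, 64)  # stand-in for a cropped video frame
  swapped_face = decoder_subject(encoder(target_frame))
  print(swapped_face.shape)  # torch.Size([1, 3, 64, 64])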

Generative Adversarial Network Method

A second methodology is the generative adversarial network (GAN). It builds upon the first method but is used to create completely synthetic images. In a GAN, two algorithms are pitted against each other: one, the generator, is fed random noise and attempts to develop a sample image from it, while the other, the discriminator, is fed a stream of real images. The discriminator compares the generated sample against the real images and judges which looks more realistic. This process is repeated a countless number of times, and the feedback from each comparison trains the generator to produce a slightly more realistic image, until eventually a highly realistic image is created.[16]

Diagram of generating a deepfake - Generative adversarial networks [16]
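
As a hedged illustration of the generator-versus-discriminator loop described above, the PyTorch sketch below trains a tiny GAN on random stand-in data; the network sizes, learning rates, and three-step loop are arbitrary assumptions chosen to keep the example short.

  import torch
  import torch.nn as nn

  latent_dim, img_dim = 64, 28 * 28

  # Generator: maps random noise to a flattened synthetic image.
  G = nn.Sequential(
      nn.Linear(latent_dim, 256), nn.ReLU(),
      nn.Linear(256, img_dim), nn.Tanh(),
  )
  # Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
  D = nn.Sequential(
      nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
      nn.Linear(256, 1), nn.Sigmoid(),
  )

  opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
  opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
  bce = nn.BCELoss()

  real_images = torch.rand(32, img_dim)  # stand-in for a batch of real photos

  for step in range(3):  # a real run repeats this loop a countless number of times
      # 1) Train the discriminator to tell real images from generated ones.
      fake_images = G(torch.randn(32, latent_dim)).detach()
      d_loss = bce(D(real_images), torch.ones(32, 1)) + \
               bce(D(fake_images), torch.zeros(32, 1))
      opt_d.zero_grad(); d_loss.backward(); opt_d.step()

      # 2) Train the generator to fool the discriminator.
      fake_images = G(torch.randn(32, latent_dim))
      g_loss = bce(D(fake_images), torch.ones(32, 1))
      opt_g.zero_grad(); g_loss.backward(); opt_g.step()

      print(f"step {step}: d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")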

Audio Deepfakes

When it comes to audio deepfakes, a variety of organizations offer synthetic voice generators, including Lyrebird, Modulate, Resemble AI, and WellSaid.[15] These tools use proprietary machine learning algorithms that "train" by analyzing audio samples and text transcripts.[17] From these same audio samples and transcripts, artificial intelligence models can produce a synthetic voice, which amounts to cloning and manipulating an individual's voice. Audio deepfake scams are on the rise, with growing reports of fraudsters using the technology to leave malicious voice mails. Though audio deepfakes are recognizable to most, the quality is passable and may leave some people susceptible to giving up sensitive information. As audio deepfakes and the technologies that produce synthetic media continue to improve, greater precautions will be necessary, as there will be greater concern about telemarketing fraud and phone scams.

Top level value chain of AI voice interface[15]
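
The commercial systems above are proprietary, but the basic training input they describe, pairs of audio clips and their transcripts, can be sketched as follows in PyTorch; the randomly generated clips and the class name are purely illustrative assumptions.

  import torch
  from torch.utils.data import DataLoader, Dataset

  class VoiceCloningDataset(Dataset):
      """Pairs raw waveform tensors with their text transcripts for a TTS-style model."""

      def __init__(self, clips, transcripts):
          assert len(clips) == len(transcripts)
          self.clips = clips              # list of 1-D float tensors (waveforms)
          self.transcripts = transcripts  # list of strings

      def __len__(self):
          return len(self.clips)

      def __getitem__(self, idx):
          return self.clips[idx], self.transcripts[idx]

  # Toy usage with synthetic data standing in for real recordings of a speaker.
  clips = [torch.randn(16000) for _ in range(4)]  # one second of "audio" at 16 kHz each
  texts = ["hello", "world", "voice", "clone"]
  loader = DataLoader(VoiceCloningDataset(clips, texts), batch_size=2)

  for waveforms, transcripts in loader:
      print(waveforms.shape, transcripts)  # e.g. torch.Size([2, 16000]) ('hello', 'world')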

Furthermore, along with security and privacy concerns, audio cloning calls into question the future implications of consent and compensation for the use of an individual's voice. The American Federation of Television and Radio Artists has cautioned the public that future misuse or replication of celebrity voices may raise greater concern and even lead to lawsuits.[18]

Technology Requirements

Creating deepfakes is resource intensive and requires a powerful computer. Key components for producing high-quality deepfake material include a high-end graphics card (GPU) with plenty of video random access memory (VRAM), a decent central processing unit (CPU) with many cores[18], plenty of random access memory (RAM), and a large amount of hard drive (HDD) space dedicated purely to creating the deepfake.

Specifications

The most important components for creating deepfakes are a high-end GPU and plenty of RAM. The chart below combines recommendations from a variety of sources. The minimum specifications are the absolute minimum, and a machine meeting only them could take a week to generate a deepfake; the optimal specifications can render a deepfake in hours, whereas a less capable device may fail to do so in a reasonable amount of time.

Component | Minimum Specifications | Optimal Specifications
GPU | Nvidia GTX card with at least 2 GB VRAM[19] | Nvidia RTX 2080 with 11 GB VRAM[20]
CPU | Intel i3 / AMD 9 | Intel i5 / AMD 12
Dedicated HDD space | 20 GB | 100 GB
RAM | 8 GB | 12 GB
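
As a quick sanity check against the minimums in the table above, the hedged Python sketch below reads GPU VRAM, system RAM, and free disk space; it assumes the third-party packages torch and psutil are installed, and the thresholds simply mirror the table.

  import shutil
  import psutil
  import torch

  GB = 1024 ** 3

  if torch.cuda.is_available():
      vram = torch.cuda.get_device_properties(0).total_memory / GB
      print(f"GPU VRAM: {vram:.1f} GB ({'ok' if vram >= 2 else 'below minimum'})")
  else:
      print("No CUDA-capable GPU detected - training would be impractically slow.")

  ram = psutil.virtual_memory().total / GB
  print(f"System RAM: {ram:.1f} GB ({'ok' if ram >= 8 else 'below minimum'})")

  free_disk = shutil.disk_usage(".").free / GB
  print(f"Free disk space: {free_disk:.1f} GB ({'ok' if free_disk >= 20 else 'below minimum'})")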

How Is Deepfake Technology Used

Politics

A deepfake Donald Trump (right) based on talkshow host Jimmy Fallon’s impression of the president (left)[21]
With the use of deepfakes, videos of politicians saying and doing things they never did have circulated the internet. Politicians are common victims because of the high-profile nature of their positions: deepfake technology requires a large quantity of photos to learn from, and there is an abundance of material available for learning a politician's face.[22]

There are fears over how deepfakes could influence elections if the technology were used maliciously. An altered video could persuade voters that a candidate said or did something they did not, which could be detrimental to their campaign and mislead voters. During an election period, "October surprises", in which harmful information about a candidate is deliberately released right before the election, are common, and experts believe deepfakes will feature prominently in the October surprises of the 2020 election.[23] Methods of detecting deepfakes have not developed as quickly as new methods of creating realistic ones, so there are concerns that deepfakes hurting a candidate's image will not be disproven until after the voting period.[24]

Obama's Public Service Announcement

An example of the use of deepfakes on politicians is the infamous Public Service Announcement video featuring Obama. BuzzFeed and filmmaker Jordan Peele used AI to create a video of Obama, pasting Peele's mouth over Obama's jaw.[25] Peele then spoke through Obama's mouth on various topics, including insults directed at Donald Trump. Although the video was humorous to many viewers, its realism demonstrated how dangerous the technology could be if used to intentionally spread false information.

Matteo Renzi on Italy's Satirical News Show

In 2019, an Italian satirical news show broadcast doctored video clips of various politicians saying inappropriate things. Targets included former Prime Minister Matteo Renzi.[26] He was shown insulting other Italian politicians, and the clip spread quickly on social media. Believing the video was authentic, many citizens did not realize it was a deepfake and were outraged by Renzi's apparent behaviour.

Adult Entertainment

Statistics in Deepfake Pornography[27]
Although deepfake technology has existed for over two decades, it became commercialized through the adult entertainment industry. It is widely used to target celebrities and other high-profile figures by featuring them, without their consent, in inappropriate adult media that they were not involved in. In 2019, there were 14,678 deepfake videos online, and an astonishing 96% of those videos were pornographic in nature.[28] According to Deeptrace, a deepfake detection company, the top four deepfake porn sites have over 134 million views. Of the pornographic videos, 41% featured American and British celebrities, while 25% featured K-pop stars; in fact, three K-pop stars were listed among the top ten individuals most frequently targeted in deepfake pornography.[29]

Measures Taken Against Deepfake Pornography

In response to the rise of deepfake pornography, many online platforms have taken steps to limit the posting and distribution of such content on their sites.

January 2018: Gfycat Bans Deepfakes

Gfycat is a popular GIF platform used to convert videos into small GIFs that are easily shared to sites such as Twitter and Facebook, but most prominently Reddit. In 2018, Gfycat announced that it would ban deepfakes and began actively removing posts with deepfake content.[30] However, because Gfycat is external to the social media platforms, users found that although the deepfakes hosted on Gfycat were taken down, the social media posts containing the now-broken Gfycat links remained up.

February 2018: Pornhub Bans Deepfakes

In February 2018, Pornhub announced it would ban AI-generated pornographic media on its site. Controversially, however, Pornhub's 2019 "Year in Review" statistics listed, among its most searched-for people, celebrities who had no involvement with the adult industry. Pornhub had banned the term "deepfake" from its search function, but deepfake porn videos were still easily accessible by searching the name of a video's target.[31]

February 2018: Reddit Bans Deepfakes

In February 2018, Reddit announced updated policies on non-consensual sexual imagery, ruling that media depicting people in a sexual context, created and posted without the subject's permission, would be banned. The subreddit r/deepfakes, which was devoted entirely to AI-generated pornographic content featuring celebrities, was banned.[32]

January 2020: Facebook Bans Deepfakes

Facebook announced that AI-manipulated videos would be banned from the website as of January 2020. However, for a deepfake video to be removed, it must be edited in a manner that is not obvious to the average person. This implies that if a doctored video can be easily identified as fake, Facebook will not take it down.[33]

February 2020: Twitter Bans Deepfakes

Twitter announced a new policy regarding manipulated visual media in February 2020. The company said it would ban harmful "synthetic or manipulated" media and label non-harmful deceptive media as "manipulated media".[34] According to Twitter, harmful content encompasses content that threatens someone's physical safety or could cause "widespread civil unrest".[34] Non-consensual deepfake porn videos meet the harmful content criteria and also violate Twitter's non-consensual nudity policy.[35]

Film Industry

In the film industry, deepfakes are used to push creative limits and test boundaries set by previous technology. There are numerous ways deepfake technology is used in the film industry, including improving overall video quality, colourizing old footage, language dubbing, voice regeneration, digital de-aging, and digital resurrection.

Colourization

Deepfake technology can also be used to turn black and white content into colour. A deep learning tool developed by NVIDIA allows artists to colourize one frame and watch the AI colourize the rest of the film in real time.[36] This is made possible by the redundancy of information between frames: "the convolutional neural network infers what the colours should be from just one colourized frame and fills in the colour in the remaining frames."[36]
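
NVIDIA's actual tool relies on a trained convolutional neural network, which is not reproduced here; the toy OpenCV sketch below only illustrates the frame-redundancy idea by borrowing the chrominance channels of one hand-coloured reference frame for a neighbouring greyscale frame (plausible only for near-static shots), and the file names are hypothetical.

  import cv2

  reference_colour = cv2.imread("frame_000_colourized.png")       # artist-coloured frame
  grey_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)  # next frame, greyscale

  # Split the reference frame into lightness (L) and colour (a, b) channels.
  ref_lab = cv2.cvtColor(reference_colour, cv2.COLOR_BGR2LAB)
  _, ref_a, ref_b = cv2.split(ref_lab)

  # Keep the new frame's own lightness but reuse the reference frame's colour channels.
  colourized = cv2.cvtColor(cv2.merge([grey_frame, ref_a, ref_b]), cv2.COLOR_LAB2BGR)
  cv2.imwrite("frame_001_colourized.png", colourized)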

Language Dubbing and Voice Regeneration

Deepfake technology has been used for language dubbing, allowing films to be dubbed in multiple languages while reducing the cost of employing local voice actors.[37] Through voice regeneration, a machine learning model can learn the voice of an actor, which is also useful when an actor is unable to speak due to a medical condition, as it allows production to move forward.[37]

Samuel L. Jackson De-aged for Captain Marvel[38]

Digital De-aging

A notable example of deepfake usage in the film industry is digital de-aging. Movies which have utilized this effect include “Captain Marvel”, where Samuel L. Jackson was de-aged approximately 25 years. The visual effects team used Jackson’s older movies as a reference and the process was done frame-by-frame to ensure a natural end result. [39]

Digital Resurrection

Digital resurrection is the reconstruction of someone from scratch, most often a late actor. An example is the digital resurrection of Peter Cushing, who passed away in 1994 yet appeared in the 2016 Star Wars instalment "Rogue One", a performance praised for its realism.[40]


Digital resurrection invites debate over moral and legal consequences, as some people believe it is morally wrong to resurrect an actor who has passed away. Furthermore, experts have raised concerns about grief hallucinations stemming from the stress of seeing resurrected figures.[41] Others believe the practice should be accepted with the late actor's family's permission, on the grounds that the actor made such a significant impact in the role before passing that re-casting could not do the legacy justice. Despite the controversy, companies such as Luka and the MIT Media Lab are investing heavily in research and further development in this field.[41]

Identifying Deepfake Technology

Detecting Common Visible Characteristics

Fact-checking has become an increasingly difficult challenge for users, especially on digital platforms. Whether in video or picture format, deepfake technology can be used to create convincing content that can stump AI "deep learning" algorithms.[11] However, regardless of the underlying deepfake technology, users can apply simple techniques to spot cues that distinguish fake content from reality.

Deepfake image of Nicolas Cage's face on Elon Musk's head [42]

Facial Hints

Subtle hints in the subject's face should not be easily dismissed: simulated faces rely heavily on still images, and some expressions cannot be imitated well because images of them are difficult to find online.[43] For instance, deepfake algorithms struggle to create facial features that mimic natural blinking.[44] Other facial appearances and geometric features in deepfake videos, including the movement of the lips, teeth, and skin, can also distinguish a real video from a fake one. In fact, researchers have noted that common characteristics of deepfake videos include discolouration and blurring of these facial features, the result of inconsistent rendering.[45]
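
One widely used blink cue is the eye aspect ratio (EAR), which drops sharply when the eye closes; the sketch below shows only the arithmetic, with hard-coded sample landmarks standing in for points that a facial-landmark detector (such as dlib or OpenFace) would normally supply.

  import numpy as np

  def eye_aspect_ratio(eye):
      """eye: six (x, y) landmarks around one eye, ordered p1..p6."""
      vertical_1 = np.linalg.norm(eye[1] - eye[5])
      vertical_2 = np.linalg.norm(eye[2] - eye[4])
      horizontal = np.linalg.norm(eye[0] - eye[3])
      return (vertical_1 + vertical_2) / (2.0 * horizontal)

  open_eye = np.array([[0, 2], [2, 4], [4, 4], [6, 2], [4, 0], [2, 0]], dtype=float)
  closed_eye = np.array([[0, 2], [2, 2.4], [4, 2.4], [6, 2], [4, 1.6], [2, 1.6]], dtype=float)

  print(round(eye_aspect_ratio(open_eye), 2))    # 0.67 -> eye open
  print(round(eye_aspect_ratio(closed_eye), 2))  # 0.13 -> eye closed (a blink)
  # A face whose EAR never dips below a blink threshold over many seconds is suspect.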

Content

Though unnoticeable to some, other common characteristics of deepfake videos include lip movements that are poorly synched with the audio and blatantly offensive material. Given the current political climate and the trend of fake news, it comes as no surprise that deepfake technology can be used for malicious intent and hoaxes. One of the best ways to spot a deepfake is therefore to ask whether the headline or content of the video aligns with the brand and overall persona of the individual shown.[46] Simply said, fact-checking is the best way to combat deepfake technology, as well as any other misinformation on the Web. In most instances, the subtle manipulation of truth can be easily spotted because of an unconvincing performance or the delivery of a "not-so-subtle" message.

Audio/Lighting

Similar to how poor uses of Photoshop and Facetune are easily noticeable in photos, manipulated content created with deepfake technology can often be spotted by audiences. In addition, poor lighting in deepfake videos can result in warped, distorted backgrounds and even mismatched shadows.[42] As mentioned earlier, facial discolouration is also a commonly spotted characteristic, due to inconsistent rendering in deepfake videos.[45]

Detecting Hidden Characteristics

To counteract misinformation created by deepfake technology, researchers have turned to technology that differentiates legitimate videos from videos manipulated by AI. For example, in an effort to protect world leaders against deepfakes, researchers from the University of California were able to map out measures for facial tracking and behaviour.[47] Open-source facial behaviour analysis toolkits have also been made widely available; OpenFace, for example, provides software capable of face recognition, head pose estimation, and eye-gaze estimation through its use of deep neural networks.[48]

Adobe trains AI to detect deepfakes and photoshopped images [49]

Deep Learning

Deep learning, a subset of machine learning within artificial intelligence, can be used to capture the facial manipulations introduced by deepfake technology. Through analysis of both unstructured and structured data, deep learning algorithms can unravel patterns in deepfake videos and images.[50] This method uses neural networks to cluster and detect similarities in deepfakes, allowing researchers to accomplish the challenging task of validating predictions on potential deepfake videos and images.[51] Notable successes in exposing deepfake videos include using deep learning to detect face warping[52] and inconsistent head poses.[53] Though the results from these machine-learning models are encouraging, limitations remain, as demonstrated by accuracy rates that mostly range from 65% to 85%, with accuracy dropping further when models are tested on previously unseen deepfake content.[54]
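
The PyTorch sketch below is a minimal stand-in for this kind of detector, not any published model: a small convolutional network trained as a binary real-versus-fake classifier, with random tensors taking the place of a labelled dataset of face crops.

  import torch
  import torch.nn as nn

  model = nn.Sequential(
      nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
      nn.Flatten(),
      nn.Linear(32 * 16 * 16, 1),  # single logit: > 0 means "fake"
  )
  optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
  loss_fn = nn.BCEWithLogitsLoss()

  frames = torch.rand(8, 3, 64, 64)             # placeholder 64x64 face crops
  labels = torch.randint(0, 2, (8, 1)).float()  # 1 = deepfake, 0 = real

  for epoch in range(3):
      loss = loss_fn(model(frames), labels)
      optimizer.zero_grad(); loss.backward(); optimizer.step()
      print(f"epoch {epoch}: loss={loss.item():.3f}")

  with torch.no_grad():
      prob_fake = torch.sigmoid(model(frames[:1]))
      print(f"predicted probability of being fake: {prob_fake.item():.2f}")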

Growth in Deepfake Technology

Circulation on the Web

In 2020, Instagram reported over 50 billion images on its platform.[55] This volume illustrates how normal, everyday users can leave themselves vulnerable to deepfake technology by leaving behind a personal digital footprint. Recent studies have also shown the growing popularity of deepfake videos on the Internet: in 2019, researchers found 14,678 deepfake videos circulating the web, most of them pornographic in nature.[56] Not only does this represent growth of over 84% in a span of only nine months between 2018 and 2019, but it also paints a stark picture of how deepfake technology will be used in revenge porn and cyber-bullying. Further projections of the growth of deepfake technology are limited, as the availability of deepfake videos on the Internet continues to grow at unprecedented rates.[57]

Industry Trends

Synthetic Media

A deepfaked alien Jeff Bezos [58]

As businesses continue to balance restrictions from the ongoing coronavirus pandemic, creative directors have looked to deepfake technology to substitute for real-life actors. A more recent example can be seen in corporate videos, as businesses can use AI to generate characters that deliver corporate training to employees.[59]

Marketplace Sellers

As deepfake videos continue to become normalized on the modern-day Web, so will the demand for customized deepfake videos. Current open-source software applications have enabled the spread of deepfake technology, but online service platforms may grow to accelerate the process of developing custom-made videos.[60] These platforms will not only allow such custom services to be advertised, but also help amateur users learn to use open-source deepfake tools more effectively and share their work globally.

Social Media and Commercial Use

Some social media platforms already let users apply face-swap filters that mimic deepfake AI algorithms.[61] This raises privacy concerns, as users put their personal information at further risk when exposed to biometric scans on social media platforms. As the commercialization of these face-swapping technologies continues, social media companies will face the challenge of detecting deepfakes and tracking their propagation. Notable examples may be seen in upcoming elections, as deepfake technology can be used in politics to influence voter decisions and sway users to believe false content.[62]

Implications of Deepfakes

Positive Benefits

The benefits provided by this technology are prominent in artistic industries. For example, deepfake technology can help us experience things that no longer exist, or never did, through methods such as digital resurrection; it has been used to edit videos without the need for a reshoot and to recreate long-dead artists.[63] The technology also has potential in the medical field, with current studies focusing on how it can synthesize realistic data that helps researchers develop new ways of treating diseases without exposing actual patient data. In 2018, the Mayo Clinic and the MGH & BWH Center for Clinical Data Science collaborated to create deepfake MRI scans and found that "by training algorithms on these medical images and on 10% real images, these algorithms became just as good at spotting tumours as an algorithm trained only on real images".[63]

Statista: Where concern is highest about fake news on the Internet in 2019 [64]

Negatives

The negatives of deepfake technology have already been experienced: deepfakes are often created to spread misinformation, and manipulated videos of politicians have disrupted political campaigns and damaged the democratic process by causing mass confusion.[9] This has been argued to reduce political freedom, as voters may not be able to distinguish fake news from true news. Deepfakes have also harmed individuals who have become targets of pornographic deepfakes, resulting in trauma, embarrassment, and emotional and reputational damage.[65] Deepfakes can additionally reduce the credibility of real news through a process called reality apathy, in which constant contact with misinformation compels people to stop trusting what they see and hear, and therefore to lose trust in existing news outlets that may in fact be reporting the truth.[2]

Raising Implications on Fake News

Though the manipulation of videos and photos may spell further concerns in the future, implications are already being felt as the threat of fake news continues to propagate on the Internet. One consequence of this concern is greater awareness of trusted news sources. Countries surveyed in 2019 expressed high concern about fake news on the Internet, which can be correlated with the growth of deepfake videos in circulation. In fact, 67% of Americans surveyed reported being concerned about fake news, which has led to greater reliance on more reputable news sources for information.[66]

Legality

Canada

In Canada, the legal community is still developing specific laws to address the harm caused by deepfakes. Until then, existing causes of action under Canadian law may be referenced, including laws covering copyright infringement, defamation, violation of privacy, appropriation of personality, and offences under the Criminal Code.[67] To address deepfake pornography, for example, any deepfake featuring an individual under the age of 18 can be prosecuted under the Criminal Code's child pornography provisions, and revenge pornography, extortion, fraud, and criminal harassment charges can also be laid depending on the circumstances.[67] Appropriation of personality and defamation claims may provide celebrity victims a means of recourse as well, as a great majority of early deepfakes featured celebrities, whose widely available images make them ideal targets. To address the use of deepfakes in politics, Canadian lawmakers have considered adding possible pathways to prosecution under the Canada Elections Act; section 480.1 of the act includes provisions against publishing false statements to affect election results, with five-year jail terms and up to $50,000 in fines.[67]

United Kingdom

In the United Kingdom in May 2018, a 25-year-old was jailed for sixteen weeks and fined for creating a pornographic deepfake video of an office intern. [68] This ruling fell under existing laws the UK already had in place to combat revenge porn as the deepfake was created in retaliation. Despite this, there have been calls from groups such as the Women and Equalities Committee for a specific law to be put in place to provide better protection for victims of deepfakes. [68]

United States of America

In the US, California passed legislation making it illegal to create or distribute deepfakes of politicians, justifying the move as protecting voters from convincing misinformation in the upcoming 2020 election.[69] The state viewed deepfakes as weaponized misinformation, but the law has come under scrutiny on free speech grounds, and it has been noted that in the US "it may be easier to challenge deepfake videos through copyright claims rather than through new laws".[69] California also passed a law allowing anyone to sue someone who publishes a deepfake that attaches their likeness to pornographic material without their consent. In other states, "no law regulates deep fakes, though some legal and technical experts have recommended adapting current laws covering libel, defamation, identity fraud or impersonating a government official".[11]

China

China took a slightly different approach to combating deepfakes in 2019. Instead of outright banning their production and spread, the Cyberspace Administration of China stated that, due to the political risks deepfakes create, publishers of deepfake content would be required to provide a disclaimer clearly identifying the content as fake.[70] However, the law did not clarify how China planned to enforce the new regulation, as deepfakes are difficult to detect from the outset. Furthermore, the new offence was expected to be monitored by content providers, who would also be held accountable if deepfakes were found on their platforms.[70]


Preventing and Combating Deepfakes

As mentioned previously, deepfakes can have disastrous effects for both individuals and businesses. At the core of the strategies below are prevention and education, as the best way to combat deepfakes is to stop them from being created and circulated in the first place.

Protection Strategies for Victims

Deepfakes can be time-consuming to create, which makes becoming a victim of one less likely for the average individual. For most people this may come as a relief: the chances of being victimized by deepfake pornography may be lower than anticipated, since, based on available historical information, deepfake pornography has largely targeted celebrities.[28] However, the technology is improving rapidly, making it easier for, say, a disgruntled ex to create deepfake pornography as an act of revenge.

A core strategy to combat the creation of deepfakes and synthetic media is to reduce the number of photographs or selfies of yourself on the internet. Most high-tech deepfake software requires plenty of images to generate a realistic deepfake, so it is important to limit that source material in the first place. As mentioned before, audio deepfakes can also be created for malicious purposes and used as scamming tools; for example, a scammer could use a cloned audio clip of a family member pleading for money to pressure a victim into sending it. Awareness of this tactic is key to prevention, and educating family members about the technology could prevent further damage. Another tactic is to request a video call, as it would be practically impossible for a deepfake to be created on the spot, provided the video is not recorded or stored where others could steal and manipulate it.

Non-consensual deepfakes, regardless of the nature of the content, can be taken down through a variety of methods.

Cyber Civil Rights Initiative

It is worth noting that this resource primarily deals with non-consensual pornography. It was created by founder Holly Jacobs to empower victims of non-consensual, leaked content to fight back.[71] According to the website, there are two general steps for taking down images on social media channels:

DMCA Services[72]
  • Ensure the content violates platform policies
    • Facebook, Instagram, Reddit, Tumblr, Twitter, and Yahoo do not allow porn
    • Google does not allow nonconsensual and/or fake pornography on its search engine [72]
  • Document the post
    • There are specific guidelines for each social media platform on how to report and remove inappropriate or involuntary content
    • For Google, victims can use this form to remove results from the search engine. The caveat is that although the content no longer appears in search results, the host site could still have it.

The next strategy involves removing content hosted on websites other than the aforementioned social media channels. This can include chat forums and porn sites.

DMCA

The main tool here is DMCA.com, a service named after the Digital Millennium Copyright Act. The site helps individuals and businesses take down copyrighted material, a right backed by the act itself. Though the act is a United States law, the website provides services that comply with the Canadian version of the DMCA takedown, which falls under the Copyright Modernization Act of Canada.[73]

Using this service is not limited to material that was formally copyrighted: if the content is the victim's property, or the victim is the subject of the photo or video, that is enough to justify its takedown. The takedown service costs $199 USD per site. However, for the more technically inclined, there is a $10-per-month Do-it-Yourself package that provides a toolkit for learning and conducting takedowns.

Protection Strategies for Businesses and Corporations

Education

Deepfakes can cause a multitude of problems for businesses. Though small businesses and enterprises alike often have the capital to invest in detection software, technology is only part of the solution. As mentioned previously, education is the best way to combat the threat of deepfakes, as it brings awareness to those who might otherwise fall victim to users employing the technology with malicious intent.[74] To further protect against bad actors, companies should invest in additional cybersecurity workshops and training sessions. It is advised to keep sessions short and to the point, with an emphasis on listing the top threats, what those threats look like, and the standard operating procedures for handling them.

Security Protocols

Formalized security protocols are an excellent complement to training sessions. Scammers typically use phone calls with fake voices impersonating other staff in order to obtain crucial information. To prevent this, companies can establish protocols specifying the procedure employees should follow if they receive a suspicious call. Bitsight recommends instilling a "trust but verify" mantra when handling calls and requests.[75] Despite adding an extra step to existing processes, this precaution acts as a form of two-factor authentication and is worth the time and effort for any business.

Detection Software

For companies that can afford it, AI-powered detection software uses the same deep learning techniques that create deepfakes to detect whether content has been altered. Another approach involves watermarking content; the company behind the tool Amber Detect states that its mission is to detect malicious deepfakes that are "sowing disinfo and deception".[76]

Use Cases in Technology Giants

Several large companies, like Facebook and Google, have been trying to stay ahead of the curve and develop tools to combat deepfakes.
Sample from Facebook dataset[77]

Facebook

Facebook in particular has been partnering with universities such as the Massachusetts Institute of Technology (MIT), Cornell, and UC Berkeley to build the Deepfake Detection Challenge (DFDC).[1] The goal of the challenge is to provide academic researchers with deepfake datasets and benchmarks, which in turn allows them to develop technology that can detect when AI has been used to alter a video. Facebook is also including a leaderboard to incentivize the industry to innovate and develop the best methods for detecting deepfake content.[77] The dataset, created by Facebook using paid actors performing a variety of tasks, consists of over 100,000 videos produced with eight facial modification algorithms.[1]

Google

Similar to Facebook, Google has also created a large dataset using paid actors. However, Google takes a different approach and uses the dataset to create thousands of deepfakes using publicly available methods.[78] The dataset has been publicly released and is intended to assist researchers in developing methods and new technology that can distinguish deepfakes from real clips.[78] Google has also extended its research and contributions to fake audio detection. It used its proprietary text-to-speech technology, familiar from Google Maps and Google Translate, to create thousands of phrases drawn from newspaper articles, spoken in 68 different voices covering a range of accents. The dataset was then released to 150 research teams during the 2019 ASVspoof challenge, which invites researchers to create new technology that detects and combats fake audio.[79]

Future Uses of Deepfakes

Photography Industry

Topaz Labs

Gigapixel AI Demonstration [80]
Topaz Labs is a software firm that sells various image-editing tools. Using AI, it developed its best-selling software, Gigapixel AI, which allows users to convert a low-resolution image into a high-resolution one.[81] Gigapixel AI does this by making the image 600% bigger. It is trained to understand and mimic how neighbouring pixels relate to one another and feeds each pixel through two million processes that analyze and optimize it, replacing blurry pixels with AI-generated ones and resulting in a crisp image.[81]
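
Gigapixel AI's trained network is proprietary; for comparison, the hedged OpenCV sketch below performs a plain 6x (600%) bicubic upscale, the classical baseline that learned super-resolution tools aim to beat, using a hypothetical input file name.

  import cv2

  low_res = cv2.imread("photo_low_res.png")  # hypothetical low-resolution input
  height, width = low_res.shape[:2]

  # 600% enlargement with bicubic interpolation (no learned detail synthesis).
  upscaled = cv2.resize(low_res, (width * 6, height * 6), interpolation=cv2.INTER_CUBIC)
  cv2.imwrite("photo_6x_bicubic.png", upscaled)
  print(low_res.shape, "->", upscaled.shape)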

Voice-Acting Industry

Modulate

Modulate is a startup software company that offers what it calls "voice skins", which allow users to turn their voice into anyone else's. The technology works by training AI on many samples of another person's voice. Modulate's goal is to license the technology to gaming companies so that players can sound like the characters they play in video games.[81] To deter people from using fake voices to commit fraud, Modulate does not plan to offer the service outside of gaming companies, so that users can only use voice skins while playing the games. As an additional measure, each recording is embedded with an audio thumbprint, which can be detected when needed by examining the audio's waveform.[82]
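
Modulate's actual audio thumbprint is not public; purely as a toy illustration of the idea, the NumPy sketch below embeds a faint high-frequency tone into a waveform and then recovers it from the spectrum, with every parameter (sample rate, marker frequency, amplitudes, threshold) being an arbitrary assumption.

  import numpy as np

  sample_rate = 16000
  t = np.arange(sample_rate) / sample_rate    # one second of audio
  speech = 0.3 * np.sin(2 * np.pi * 220 * t)  # stand-in for a voice-skin clip

  marker_hz = 7000  # marker frequency (arbitrary choice)
  watermarked = speech + 0.002 * np.sin(2 * np.pi * marker_hz * t)

  # Detection: look for unusual energy at the marker frequency in the spectrum.
  spectrum = np.abs(np.fft.rfft(watermarked))
  freqs = np.fft.rfftfreq(len(watermarked), d=1 / sample_rate)
  marker_bin = np.argmin(np.abs(freqs - marker_hz))
  noise_floor = np.median(spectrum)

  print("marker present:", spectrum[marker_bin] > 20 * noise_floor)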

Conclusion

Deepfake technology is an emerging technology trend with tremendous potential to both benefit and harm society. Areas heavily affected by deepfakes include the film industry, politics, and adult entertainment. Deepfakes cause harm by spreading misinformation, damaging the political process, and inflicting emotional trauma on victims of deepfake pornography. Alongside these negative implications, the emergence of the technology also sheds light on security risks that leave both businesses and individuals vulnerable to malicious attacks. Cyber threats are becoming more common in today's digital age, which may call for greater legislation from governments, as methods of detecting deepfakes are not advancing as quickly as new and more effective methods of creating them. The benefits of the technology stem from its potential in the healthcare industry, where it can create highly realistic synthetic datasets, and in the film industry, where it has been used to create higher-quality content. Film studios, content producers, and everyday consumers now have the power to edit existing digital media to produce unique content, although uses such as digital resurrection have caused controversy. As society progresses into a new digital ecosystem, it will face the challenge of determining the purposes of this technology and how best to combat malicious synthetic media. Further implications may include changes in consumer behaviour as users become more wary of the content they consume, leading to more skepticism and fact-checking on the Internet. Deepfake technology has the potential to greatly benefit society, but its negative effects must be regulated in order for its benefits to be fully realized.

Authors

Woody Iu, Arden Thandi, Raven Bruan, Abbey Cho
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada

References

  1. 1.0 1.1 1.2 https://ai.facebook.com/datasets/dfdc/
  2. 2.0 2.1 https://www.forbes.com/sites/simonchandler/2020/03/09/why-deepfakes-are-a-net-positive-for-humanity/#4fc94a522f84
  3. https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth
  4. 4.0 4.1 4.2 4.3 4.4 4.5 https://www.researchgate.net/publication/220720338_Video_Rewrite_Driving_Visual_Speech_with_Audio
  5. https://www.malavida.com/en/soft/fakeapp/#gref
  6. 6.0 6.1 6.2 6.3 6.4 https://www.businessinsider.com/deepfakes-explained-the-rise-of-fake-realistic-videos-online-2019-6#in-june-2019-a-deepfake-of-mark-zuckerberg-appeared-on-instagram-facebook-also-decided-to-leave-it-up-setting-a-precedent-for-leaving-manipulated-videos-on-their-platforms-9
  7. 7.0 7.1 7.2 https://www.washington.edu/news/2017/07/11/lip-syncing-obama-new-tools-turn-audio-clips-into-realistic-video/
  8. https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html
  9. 9.0 9.1 9.2 9.3 https://www.cnet.com/news/facebook-twitter-and-youtube-grapple-with-altered-videos-ahead-of-the-2020-election/
  10. https://venturebeat.com/2019/06/13/u-s-congress-targets-deepfakes-ahead-of-2020-election/
  11. 11.0 11.1 11.2 https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/?noredirect=on
  12. https://www.washingtonpost.com/technology/2020/01/06/facebook-ban-deepfakes-sources-say-new-policy-may-not-cover-controversial-pelosi-video/
  13. https://www.foxnews.com/tech/terrifying-high-tech-porn-creepy-deepfake-videos-are-on-the-rise
  14. 14.0 14.1 https://arstechnica.com/science/2019/12/how-i-created-a-deepfake-of-mark-zuckerberg-and-star-treks-data/
  15. 15.0 15.1 15.2 https://medium.com/swlh/the-role-of-deepfake-audio-in-the-growing-ai-voice-market-a68fd7fd8944
  16. 16.0 16.1 https://medium.com/@saibharath897/generative-adversarial-networks-gans-560c5c988128
  17. https://pitchfork.com/thepitch/what-does-jay-zs-fight-over-audio-deepfakes-mean-for-the-future-of-ai-music/
  18. 18.0 18.1 https://www.theverge.com/2020/1/29/21080553/ftc-deepfakes-audio-cloning-joe-rogan-phone-scams
  19. http://www.deepfakestate.com/blog/system-requirements-for-deepfaking-2-12-2018
  20. https://medium.com/@jsoverson/from-zero-to-deepfake-310551e59aa3
  21. https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy
  22. https://www.creativebloq.com/features/deepfake-examples
  23. https://www.cnbc.com/2019/10/15/deepfakes-could-be-problem-for-the-2020-election.html
  24. https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/
  25. https://www.theverge.com/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed
  26. https://observers.france24.com/en/20191008-deepfake-video-former-italian-pm-matteo-renzi-sparks-debate-italy
  27. https://koreajoongangdaily.joins.com/2020/05/17/features/deepfake-artificial-intelligence-pornography/20200517190700189.html
  28. 28.0 28.1 https://mc.ai/96-of-deepfakes-are-pornographic/&sa=D&ust=1595284287011000&usg=AFQjCNHwssupNLZ2rF47zFiUf1-G6Z_5Ew
  29. https://koreajoongangdaily.joins.com/2020/05/17/features/deepfake-artificial-intelligence-pornography/20200517190700189.html&sa=D&ust=1595284887916000&usg=AFQjCNGizoWfE0jNyFAk6sqzUIhwPzia2g
  30. https://www.theverge.com/2018/1/31/16956460/gfycat-ai-generated-fake-porn-deepfakes-subreddit-reddit
  31. https://www.vice.com/en_us/article/wxez8y/pornhub-year-in-review-deepfake
  32. https://www.theverge.com/2018/2/7/16982046/reddit-deepfakes-ai-celebrity-face-swap-porn-community-ban
  33. https://www.bbc.com/news/technology-51018758
  34. 34.0 34.1 https://www.vox.com/recode/2020/2/4/21122653/twitter-policy-deepfakes-nancy-pelosi-biden-trump
  35. https://help.twitter.com/en/rules-and-policies/intimate-media
  36. 36.0 36.1 https://news.developer.nvidia.com/ai-can-convert-black-and-white-clips-into-color/
  37. 37.0 37.1 https://amt-lab.org/blog/2020/3/deepfake-technology-in-the-entertainment-industry-potential-limitations-and-protections#:~:text=99%25%20of%20the%20circulated%20videos,entertainment%20industry%2C%20specifically%20film%20production.
  38. https://www.hollywoodreporter.com/behind-screen/how-did-marvel-de-age-samuel-l-jackson-robert-downey-jr-lola-vfx-supervisor-spills-secrets-1249544
  39. https://www.forbes.com/sites/adawson/2019/02/27/samuel-l-jackson-and-clark-gregg-de-aged-in-captain-marvel/#6a2fb0b6487b
  40. https://www.theguardian.com/commentisfree/2016/dec/21/peter-cushing-rogue-one-resurrection-cgi
  41. 41.0 41.1 https://www.nbcnews.com/mach/science/will-digital-resurrections-let-us-bring-back-dead-ncna865806
  42. 42.0 42.1 https://phys.org/news/2019-06-deepfakes-reveals.html
  43. https://www.miteksystems.com/blog/what-is-a-deepfake-and-how-does-it-impact-fraud
  44. https://theconversation.com/detecting-deepfake-videos-in-the-blink-of-an-eye-101072/
  45. 45.0 45.1 https://www.wired.com/story/deepfakes-getting-better-theyre-easy-spot/
  46. https://slate.com/technology/2019/06/how-to-spot-deepfake-videos.html
  47. https://openaccess.thecvf.com/content_CVPRW_2019/papers/Media%20Forensics/Agarwal_Protecting_World_Leaders_Against_Deep_Fakes_CVPRW_2019_paper.pdf
  48. https://cmusatyalab.github.io/openface/#posts-about-openface
  49. https://interestingengineering.com/adobe-trains-ai-to-detect-deepfakes-and-photoshopped-images
  50. https://blog.goodaudience.com/training-a-deep-learning-model-for-deepfake-detection-914bbceb3fa9
  51. https://www.technologyreview.com/2020/06/12/1003475/facebooks-deepfake-detection-challenge-neural-network-ai/
  52. https://arxiv.org/abs/1811.00656
  53. https://arxiv.org/abs/1811.00661
  54. https://arxiv.org/pdf/2001.00179.pdf
  55. https://www.omnicoreagency.com/instagram-statistics/
  56. https://www.bbc.com/news/technology-49961089
  57. https://www.businessinsider.com/dangerous-deepfake-technology-spreading-cannot-be-stopped-2019-7
  58. https://www.businessinsider.com/deepfake-video-jeff-bezos-elon-musk-star-trek-2020-2
  59. https://www.wired.com/story/covid-drives-real-businesses-deepfake-technology/
  60. https://futurumresearch.com/research-notes/deepfake-technology-ecosystem/
  61. https://techcrunch.com/2020/01/03/tiktok-deepfakes-face-swap/
  62. https://www.brookings.edu/blog/techtank/2019/06/03/deepfakes-social-media-and-the-2020-election/
  63. 63.0 63.1 https://www.forbes.com/sites/simonchandler/2020/03/09/why-deepfakes-are-a-net-positive-for-humanity/#4fc94a522f84
  64. https://www-statista-com.proxy.lib.sfu.ca/chart/18343/share-concerned-about-what-is-real-and-fake-on-the-internet/
  65. https://timreview.ca/article/1282
  66. https://www.statista.com/chart/18343/share-concerned-about-what-is-real-and-fake-on-the-internet/
  67. 67.0 67.1 67.2 https://lop.parl.ca/sites/PublicWebsite/default/en_CA/ResearchPublications/201911E
  68. 68.0 68.1 https://www.huffingtonpost.co.uk/entry/what-is-deep-fake-pornography-and-is-it-illegal-in-the-uk_uk_5bf4197ce4b0376c9e68f8c5?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAAUblhuTNCPNDSGHZgNvl_djF2hamgXz7KIKyhF_s1oJoqRmqpdNTuXmA_FHc2xu1f0vl2kpgVbvPYaDEvGyeEnukCK_M7nM3vK2RXhle--zVfAX1lni2mpPDyPsda_keGPdNBVaM5kbuwaNTegwSedj8nc4CtMQxYcEeb3GSyEa
  69. 69.0 69.1 https://www.theguardian.com/us-news/2019/oct/07/california-makes-deepfake-videos-illegal-but-law-may-be-hard-to-enforce
  70. 70.0 70.1 https://www.forbes.com/sites/emmawoollacott/2019/11/30/china-bans-deepfakes-in-new-content-crackdown/#4e897f1c3537
  71. https://www.cybercivilrights.org/online-removal/
  72. 72.0 72.1 https://www.dmca.com/Takedowns.aspx
  73. https://www.dmca.com/faq/Does-Canada-have-a-version-of-the-DMCA-Takedown.
  74. https://clutch.co/it-services/resources/how-to-protect-business-deepfakes
  75. https://www.bitsight.com/blog/how-to-protect-your-organization-from-the-emerging-deepfake-threat
  76. https://ambervideo.co
  77. 77.0 77.1 https://syncedreview.com/2019/09/06/facebook-introduces-dataset-challenge-to-counter-deepfakes/
  78. 78.0 78.1 https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html
  79. https://www.blog.google/outreach-initiatives/google-news-initiative/advancing-research-fake-audio-detection/
  80. https://topazlabs.com/gigapixel-ai/
  81. 81.0 81.1 81.2 https://www.fastcompany.com/90337767/deepfakes-started-a-war-on-whats-real-these-startups-are-racing-to-commercialize-it
  82. https://modulate.ai/ethics