Deepfake Technology

From New Media Business Blog


Revision as of 02:39, 30 July 2020



Deepfake technology uses artificial intelligence, typically deep learning, to superimpose a person's likeness or voice onto media they never appeared in. While the technology has legitimate uses in film, photography, and medicine, it has become notorious for nonconsensual pornography, political disinformation, and fraud.

History of Deepfake Technology


How Does Deepfake Technology Work

Deepfake Technology Techniques and Methodology

Deepfake Technology Algorithms

How Is Deepfake Technology Used


With the use of deepfakes, videos of particular politicians saying and doing things they have not actually said and done have circulated the internet. Politicians are common victims because of the high-profile nature of their positions: deepfake technology requires a large quantity of photos to learn from, and there is an abundance of images of a politician's face for the technology to learn from.

There are fears over how deepfakes could influence elections if the technology were used maliciously. An altered video could persuade voters that a candidate said or did something they did not, which could be detrimental to their campaign. "October surprises", when harmful information about a candidate is deliberately released right before an election, are common during election periods, and experts believe deepfakes will feature prominently in the October surprises of the 2020 election. [1]

Methods of detecting deepfakes are not being developed as quickly as new methods of creating realistic ones, leading to concerns that a deepfake damaging a candidate's image may not be disproven until after the voting period.[2]

Obama's Public Service Announcement

A well-known example of deepfakes applied to politicians is the infamous Public Service Announcement video of Obama. Buzzfeed and filmmaker Jordan Peele used AI to create a video of Obama, pasting Peele's mouth over Obama's jaw.[3] Peele then spoke all sorts of things through Obama's mouth, including insults directed at Donald Trump. Although the video was humorous to many viewers, its realism demonstrated how dangerous the technology could be if used to intentionally spread false information.

Italy's Matteo Renzi's News Show

In 2019, an Italian satirical news show broadcast doctored video clips of various politicians saying inappropriate things, including former prime minister Matteo Renzi.[4] He was shown insulting other Italian politicians, and the clip spread quickly on social media. Many citizens did not realize the video was a deepfake and, believing it authentic, were outraged by Renzi's apparent behaviour.

Adult Entertainment

Although deepfake technology has existed for decades, it came to light through the adult entertainment industry. It is widely used to target celebrities and high-profile figures by featuring them, without their consent, in inappropriate adult media they were never involved in. In 2019 there were 14,678 deepfake videos online, and an astonishing 96% of those videos were pornographic. [5] According to Deeptrace, a deepfake detection company, the top four deepfake porn sites have over 134 million views.

Of that 96%, 41% of subjects were American and British celebrities, but an astonishing 25% of deepfake pornography featured K-pop stars. In fact, 3 K-pop stars were listed among the top 10 individuals most frequently targeted in deepfake pornography. [6]

Measures Taken Against Deepfake Pornography

In response to the rise of deepfake pornography, many online platforms have taken a stance to limit the posting and distribution of such content on their sites.

In January 2018, Gfycat announced the ban of deepfakes on its site and started actively removing posts with deepfake content.[7]

In February 2018, Twitter banned posts containing deepfakes and announced it would suspend accounts suspected of posting such content. Pornhub also banned deepfakes on its website, stating that nonconsensual videos are an automatic violation of its terms of service. [8]

Following public pressure, as online platforms actively advocated for the removal of deepfakes, Reddit joined the wave soon after Twitter by banning the r/deepfakes subreddit for involuntary pornography.[9]

Film Industry

Identifying Deepfake Technology

Detecting Common Visible Characteristics

Fact-checking has become an increasingly difficult challenge for users, especially on digital platforms. Whether in video or picture format, deepfake technology can create convincing media that can stump AI "deep learning" algorithms.[10] However, regardless of the technology underlying deepfakes, users can apply simple techniques to spot cues that distinguish fake from reality.

Facial Hints

Subtle hints in a subject's face should not be easily dismissed: simulated faces rely heavily on source photographs, and some expressions cannot be imitated because they are hard to find online.[11] For instance, deepfake algorithms struggle to create facial features that mimic blinking. [12] Further facial appearances and geometric features in deepfake videos, including movement of the lips, teeth, and skin, can also reveal whether a video is real or fake. In fact, researchers have noted that common characteristics of deepfake videos include discoloration and blurring in these same areas of the face, the result of inconsistent rendering. [13]
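The blink cue above can be quantified with the eye aspect ratio (EAR) used in landmark-based blink detection: the ratio of an eye's vertical opening to its width stays roughly constant while the eye is open and collapses toward zero during a blink. The six-landmark layout and sample coordinates below are illustrative only; a real pipeline would obtain landmarks from a detector such as dlib or OpenFace.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks.

    eye[0]/eye[3] are the horizontal corners; (eye[1], eye[5]) and
    (eye[2], eye[4]) are the upper/lower landmark pairs.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Illustrative landmark positions for an open and a nearly closed eye.
open_eye = [(0, 2), (2, 3), (4, 3), (6, 2), (4, 1), (2, 1)]
closed_eye = [(0, 2), (2, 2.1), (4, 2.1), (6, 2), (4, 1.9), (2, 1.9)]

print(round(eye_aspect_ratio(open_eye), 3))    # stays high: eye open
print(round(eye_aspect_ratio(closed_eye), 3))  # near zero: blink frame
```

A clip whose per-frame EAR series never dips below a blink threshold over a long stretch of video would be flagged as suspicious under this cue.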


Though unnoticeable to some, other common characteristics of deepfake videos include lip-synching poorly matched to the audio and blatantly offensive material. Given the current political climate and the trend of fake news, it comes as no surprise to audiences that deepfake technology can be used for malicious intent and hoaxes. One of the best ways to spot a deepfake, therefore, is to ask whether the headline or content of the video aligns with the brand and overall persona of the individual shown.[14] Simply put, fact-checking is the best way to combat deepfake technology, as well as any misinformation on the Web. In most instances, the subtle manipulation of truth can be easily spotted because of an unconvincing performance or the delivery of a "not-so-subtle" message.


Similar to how poor examples of Photoshop and Facetune are easily noticeable in photos, manipulated content created with deepfake technology can often be spotted by audiences. In addition, poor lighting in deepfake videos can result in warped, distorted backgrounds and even mismatched shadows.[15] As mentioned earlier, face discolouration is also a commonly spotted characteristic, due to inconsistent rendering in deepfake videos. [13]

Detecting Hidden Characteristics

To counteract misinformation created by deepfake technology, researchers have turned to technology itself to differentiate legitimate videos from videos manipulated by AI. For example, in an effort to protect world leaders against deepfakes, researchers such as those from the University of California were able to map out measures for facial tracking and behaviour. [16] Open-source facial behaviour analysis toolkits have also been made widely available. For example, OpenFace provides software capable of facial landmark detection, head-pose estimation, and eye-gaze estimation through its use of deep neural networks. [17]
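The behavioural-tracking idea can be sketched as comparing how facial-motion signals co-vary in authentic footage of a person versus a suspect clip: an impersonation tends to correlate poorly with the person's habitual mannerisms. The feature names, toy time series, and threshold below are all illustrative assumptions, not the published method (real systems track facial action units extracted with tools like OpenFace).

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def matches_fingerprint(reference, suspect, threshold=0.5):
    """Flag a clip whose facial-motion signals correlate poorly
    with a reference recording of the same person."""
    corrs = [pearson(reference[k], suspect[k]) for k in reference]
    return statistics.mean(corrs) >= threshold

# Toy per-frame signals (e.g. mouth opening, brow raise) for a
# reference clip, a consistent clip, and an impersonation.
reference = {"mouth": [0.1, 0.4, 0.8, 0.5, 0.2], "brow": [0.0, 0.2, 0.3, 0.2, 0.1]}
genuine   = {"mouth": [0.2, 0.5, 0.9, 0.6, 0.3], "brow": [0.1, 0.3, 0.4, 0.3, 0.2]}
fake      = {"mouth": [0.9, 0.1, 0.8, 0.0, 0.7], "brow": [0.3, 0.0, 0.4, 0.0, 0.3]}

print(matches_fingerprint(reference, genuine))  # True: motion co-varies
print(matches_fingerprint(reference, fake))     # False: mannerisms differ
```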

Deep Learning

Deep learning is a subset of machine learning within artificial intelligence and can be used to capture the facial manipulations introduced by deepfake technology. Through analysis of both unstructured and structured data, deep learning algorithms can unravel patterns in deepfake videos and images. [18] This method utilizes neural networks to cluster and detect similarities in deepfakes, allowing researchers to accomplish the challenging task of validating predictions on potential deepfake videos and images. [19] Notable successes by researchers in exposing deepfake videos include using deep learning to detect face warping[20] and inconsistent head poses.[21] Though the results from these machine-learning models are encouraging, limitations still exist: accuracy rates mostly range from 65% to 85%, and accuracy drops further when models are tested on previously unseen deepfakes.[22]
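The classification step can be illustrated with a toy stand-in for these models: a single logistic unit trained by gradient descent to separate artifact scores of real and fake frames. Real CNN detectors learn such features directly from pixels; the feature names (blur and warp scores) and data points here are fabricated for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit a logistic classifier sigmoid(w.x + b) by gradient descent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return True if the frame is classified as a deepfake."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5

# Fabricated (blur_score, warp_score) features: deepfake frames
# (label 1) tend to score higher on both than real frames (label 0).
samples = [(0.9, 0.8), (0.8, 0.9), (0.7, 0.7), (0.1, 0.2), (0.2, 0.1), (0.3, 0.2)]
labels  = [1, 1, 1, 0, 0, 0]

w, b = train(samples, labels)
print(predict(w, b, (0.85, 0.75)))  # flagged as fake
print(predict(w, b, (0.15, 0.10)))  # judged real
```

The 65%-85% accuracy figures quoted above reflect how much harder this boundary is to learn on real footage, where the two classes overlap heavily and new generation methods shift the feature distribution.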

Growth in Deepfake Technology

Circulation on the Web

In 2020, Instagram reported over 50 billion images on its platform.[23] This illustrates how normal, everyday users can leave themselves vulnerable to deepfake technology by leaving behind a personal digital fingerprint. Recent studies have also shown the growth in popularity of deepfake videos on the Internet. In 2019, researchers found 14,678 deepfake videos circulating on the web, most of which were pornographic in nature. [24] Not only does this represent growth of over 84% in a span of just nine months between 2018 and 2019, but it also paints a stark picture of how deepfake technology will be used in revenge porn and cyber-bullying. Further projections on the growth of deepfake technology are limited, as the availability of deepfake videos on the Internet continues to grow at unprecedented rates. [25]
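The growth figure can be sanity-checked with simple arithmetic: if 14,678 videos represent roughly 84% growth over nine months, the implied earlier count is just under 8,000. The section does not state the baseline figure, so this is only a back-calculation from the two numbers it does give.

```python
count_2019 = 14_678  # deepfake videos found in 2019
growth = 0.84        # reported growth rate over the nine months

# count_2019 = baseline * (1 + growth)  =>  solve for the baseline
implied_baseline = count_2019 / (1 + growth)
print(round(implied_baseline))  # roughly 7,977 videos nine months earlier
```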

Industry Trends

Synthetic Media

As businesses continue to balance restrictions from the ongoing coronavirus pandemic, creative directors have looked to deepfake videos as a substitute for real-life actors. A recent example can be seen in corporate videos, as businesses can use AI to generate characters that deliver corporate training to employees.[26]

Marketplace Sellers

As deepfake videos continue to normalize themselves on the modern-day Web, so will the demand for customized deepfake videos. Current open-source software applications have driven the spread of deepfake technology, and online service platforms may grow to accelerate the process of developing custom-made videos.[27] These platforms will not only allow for the advertisement of such custom services, but also help amateur users learn to use open-source deepfake tools more effectively and share their work globally.

Social Media and Commercial Use

Some social media platforms already allow users to apply face-swap filters that mimic deepfake AI algorithms.[28] This raises privacy concerns, as users put themselves further at risk of losing personal information when exposed to biometric scans on social media platforms. As the commercialization of these face-swapping technologies continues, social media companies will face the challenge of detecting deepfakes and tracking their propagation. Notable examples may be seen in upcoming elections, as deepfake technology can be used in politics to influence voter decisions and sway users to believe false content.[29]

Implications of Deepfakes

Positive Benefits

The benefits provided by the technology include improvements in artistic industries. For example, deepfake technology can help us experience things that no longer exist, or never have. It has been used to edit video without the need for a reshoot and to recreate long-dead artists. The technology also has potential in medicine, with current studies focusing on how deepfakes can synthesize realistic data that helps researchers develop new ways of treating diseases without actual patient data. In 2018, the Mayo Clinic and the MGH & BWH Center for Clinical Data Science collaborated to create deepfake MRI scans. They found that algorithms trained on synthesized medical images, combined with only ten percent real images, became just as good at spotting tumours as an algorithm trained exclusively on real images.

Negative Impacts

The negative impacts have already been felt. Deepfakes are often created to spread misinformation, and manipulated videos of politicians have disrupted political campaigns, damaging the democratic process by causing mass confusion. This has been argued to reduce political freedom, as voters may be unable to distinguish fakes from true news. Deepfakes have also harmed individuals who become targets of pornographic deepfakes and face trauma, embarrassment, and reputational damage as a result. As a whole, deepfakes reduce the credibility of real news through a process called reality apathy: constant contact with misinformation compels people to stop trusting what they see and hear, and therefore to lose trust in news outlets that may in fact be reporting the truth.



United Kingdom

In the United Kingdom in May 2018, a 25-year-old was jailed for sixteen weeks and fined for creating a pornographic fake of an office intern (Gallagher, 2019). The ruling fell under existing laws the UK had in place to combat revenge porn, as the deepfake was created in retaliation; groups such as the Women and Equalities Committee have since called for a specific law to create better protections (Gallagher, 2019).

United States of America

In the US, California passed legislation making it illegal to create or distribute deepfakes, justifying the action as protecting its voters from convincing misinformation in the upcoming 2020 election (Paul, 2019). The state viewed deepfakes as weaponized misinformation, but the law has come under scrutiny on free-speech grounds, and it has been noted that in the US "it may be easier to challenge deep fake videos through copyright claims rather than through new laws" (Paul, 2019). The state has also passed another law allowing anyone to sue someone who publishes a deepfake attaching their likeness to pornographic material without their consent. In other states, "no law regulates deep fakes, though some legal and technical experts have recommended adapting current laws covering libel, defamation, identity fraud or impersonating a government official" (Harwell, 2019).


China

China took a slightly different approach to combating deepfakes in 2019: instead of outright banning their production and spread, the Cyberspace Administration of China stated that, due to the political risks deepfakes create, publishers of deepfake content would be required to provide a disclaimer clearly identifying the content as fake (Woollacott, 2019). However, the law did not clarify how China planned to enforce the new regulation, as deepfakes are difficult to detect at the outset. Furthermore, the new offence was expected to be monitored by content providers, who would also be held accountable if deepfakes were found on their platforms (Woollacott, 2019).

Preventing and Combating Deepfakes

Prevention Methods

Protection strategies for Victims

Protection strategies for Businesses and Corporations




Future uses of Deepfakes

Photography Industry

Topaz Labs

Topaz Labs is a software firm that sells various image-editing tools. Using AI, it developed its best-selling software, Gigapixel AI, which allows users to convert a low-resolution image into a high-resolution one. Gigapixel AI does this by making the image 600% bigger: it is trained to understand and mimic how pixels look around each other, and feeds each pixel through 2 million processes that analyze and optimize it, so that blurry pixels are replaced by AI-generated ones, resulting in a crisp image.
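For contrast, the naive baseline that learned upscalers improve on can be sketched in a few lines: nearest-neighbour upscaling simply copies each source pixel into a larger block (reading "600% bigger" as a 6x scale per side, an assumption), which is what produces the blockiness that tools like Gigapixel AI replace with synthesized detail. This is an illustrative baseline, not Topaz's method.

```python
def upscale_nearest(pixels, factor):
    """Enlarge a 2-D grid of pixel values by an integer factor,
    copying each source pixel into a factor-by-factor block."""
    return [
        [pixels[r // factor][c // factor] for c in range(len(pixels[0]) * factor)]
        for r in range(len(pixels) * factor)
    ]

image = [[10, 20],
         [30, 40]]
big = upscale_nearest(image, 6)

print(len(big), len(big[0]))  # 12 12: each side scaled 6x
print(big[0][:6])             # [10, 10, 10, 10, 10, 10]: copied, not synthesized
```

A learned upscaler fills those blocks with plausible high-frequency detail predicted from training data rather than flat copies, which is why the result looks crisp instead of blocky.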

Voice-Acting Industry


Modulate

Modulate is a startup software company that offers what it calls "voice skins", which allow users to turn their voice into anyone else's. The technology works by training AI on many samples of someone's voice. Modulate's goal is to license the technology to gaming companies so that players can sound like the characters they play in video games. To deter people from using fake voices to commit fraud, Modulate does not plan to offer the service outside of gaming companies, so users can only use voice skins while playing the games. As an additional measure, each recording is embedded with an audio thumbprint, which can be detected when needed by examining the audio's waveform.
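The thumbprint idea can be sketched as a spread-spectrum-style watermark: mix a faint keyed pseudorandom pattern into the samples when the voice skin is generated, then detect it later by correlating the audio against the same keyed pattern. Modulate has not published its scheme, so everything below (the key, strength, and threshold) is an illustrative assumption.

```python
import math
import random

def keyed_pattern(key, n):
    """Deterministic +/-1 pattern derived from a secret key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed_thumbprint(samples, key, strength=0.05):
    """Mix a faint keyed pattern into the audio samples."""
    pattern = keyed_pattern(key, len(samples))
    return [s + strength * p for s, p in zip(samples, pattern)]

def has_thumbprint(samples, key, threshold=0.025):
    """Correlate against the keyed pattern; marked audio scores high,
    unmarked audio averages out to roughly zero."""
    pattern = keyed_pattern(key, len(samples))
    score = sum(s * p for s, p in zip(samples, pattern)) / len(samples)
    return score > threshold

# A plain 440 Hz tone standing in for a voice clip, and the same
# tone with the thumbprint mixed in.
voice = [0.3 * math.sin(2 * math.pi * 440 * t / 16_000) for t in range(8_000)]
marked = embed_thumbprint(voice, key="modulate-demo")

print(has_thumbprint(marked, key="modulate-demo"))  # True
print(has_thumbprint(voice, key="modulate-demo"))   # False
```

Because the pattern is derived from a secret key and sits well below audible levels, a synthetic clip can be verified later without noticeably altering the voice itself.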

