Deepfake Technology

From New Media Business Blog

Introduction

Deepfakes are synthetic images, videos, and audio created with deep learning, most often by superimposing one person’s likeness onto existing footage of another. The term combines “deep learning” and “fake”. This page traces the history of the technology, explains how it works and how it can be detected, surveys its uses in politics, adult entertainment, and film, and reviews its implications, legal status, and the measures being taken against its misuse.

History of Deepfake Technology

Deepfake technology first emerged as early as 1997, when the academic paper “Video Rewrite: Driving Visual Speech with Audio” was published. [1] This paper was the first to introduce the idea of deepfake technology, although at the time the authors referred to the concept as Video Rewrite. The technology used existing footage to automatically create a video of a person speaking words that were not in the original footage. [1] This early version of deepfake technology used computer vision techniques to track points on the speaker’s mouth and then used morphing techniques to combine these mouth movements, with the intent that the technology be used in movie dubbing. [1] It was the first fully automated facial animation system, and it relied on learning the connections between the sounds produced and the shape of the subject’s face. The researchers demonstrated their creation by reanimating footage of John F. Kennedy saying “I never met Forrest Gump”; however, at the time the quality of the video made it fairly easy to recognize the footage as not genuine. [1]

Timeline

1997

In 1997 the first research paper introducing Video Rewrite technology was published; it involved several experiments in which the technology was used to create fake footage of subjects saying phrases they had not said in the original videos. [1]

July 2017

There were relatively few notable developments in this field for the next two decades, but deepfakes became a topic of interest again in July 2017, when the University of Washington released a video of a computer-generated Barack Obama created using novel algorithms. [2] The researchers hoped the technology would be used to improve video conferencing, since streaming audio instead of video uses less bandwidth, as well as for futuristic applications in artistic fields. [2]

November 2017

The same year in November, a major event brought deepfakes to the attention of the public when a Reddit user by the name “deepfakes” posted videos with celebrities’ faces swapped onto pornographic content. [2] Targeted celebrities included Scarlett Johansson and Emma Watson, and this first uploader’s username became the reference point for future uses of the technology. [2]

January 2018

In January 2018, FakeApp was developed and made available for download. This easily accessible tool allowed the average user to manipulate footage and create deepfakes themselves, marking a significant rise both in the popularity of deepfakes and in the number of them circulating the internet. [2]

Figure 1. Examples of Deepfakes Created with FakeApp

July 2018

Following these events, manipulated videos of politicians, celebrities, and other public figures have gone viral, raising concerns over the regulation and detection of deepfakes.

One viral example is a July 2018 video of Alexandria Ocasio-Cortez edited to make it appear as though she was unable to answer basic questions during an interview. Many viewers did not realize the content was fake, and the video called her political abilities into question. [2]

2018–2019

The next notable events surrounding deepfake technology came in 2018 and early 2019, as online platforms began banning deepfakes after reports that the technology was being used to create and circulate revenge porn. [3] The bans caused further controversy, given that the videos were not actual recordings of real people. Sites such as Twitter, Discord, and Pornhub explicitly banned the use of deepfakes, and Reddit followed suit in February 2018. [3]

In 2019, the US House Intelligence Committee began hearing official testimony on deepfakes and discussing the national security threats they pose. Other countries have also begun to address the threat of deepfakes in an official capacity. [3]

Figure 2. U.S. Congress Deepfake Hearing

Today

In 2020, deepfakes have caused significant reputational damage to public figures, but they have also been used in industries such as film to enhance footage. Deepfakes are recognized as highly controversial and, as such, various countries are working towards their regulation. Although many platforms have banned the circulation of deepfakes on their websites, others, such as Facebook, have only recently changed their policies. Facebook stated on January 7, 2020 that it would remove videos altered by artificial intelligence in ways meant to mislead viewers; however, critics pointed out that the new guidelines do not apply to all doctored videos, only to those edited or synthesized in ways an average user could not easily spot.[4] Facebook will continue to allow manipulated videos that are satirical in nature, which in essence allows videos such as the infamous clip of Nancy Pelosi appearing intoxicated to continue to spread. This “policy does not get to the core issue of how their platform is being used to spread disinformation… but rather how professionally that disinformation is created.”[5]

How does Deepfake Technology Work

As the term implies, “deepfake” is a combination of the words “deep learning” and “fake”. Creating deepfakes uses artificial intelligence to learn the similarities between images until it can generate its own. There are two primary methods of creating deepfake images and videos.

Methods

Variational Autoencoder Method

The first method is used for generating fake events or transposing faces onto videos. Two autoencoders are fed thousands of face shots, one set of the subject and one of the target. Deepfake creators source photos from the internet or take snippets from videos to obtain the images. It is best to use a variety of sources to get a diverse set of images of both the subject and the target, as the encoder needs to see each face from a variety of angles and lighting conditions. This is a lengthy process and requires plenty of storage space. [6]

The autoencoders share an encoder, which learns the similarities between the two faces and reduces each person’s features to a latent space. The latent space represents the key features of a person’s face and body posture; for example, the mouth, head position, and eyes are important features to capture in this state. Once training is complete, a frame of the target video is passed through the shared encoder and reconstructed by the decoder trained on the likeness of the subject, thus superimposing the likeness of the subject on the underlying facial features and posture of the target video. [7]
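A minimal sketch of this shared-encoder idea is shown below in PyTorch. The layer sizes, 64x64 input resolution, and class names are illustrative assumptions chosen for brevity, not the architecture of any particular deepfake tool.

 # Sketch of the face-swap autoencoder idea: one shared encoder, two decoders
 # (one per identity). Layer sizes and the 64x64 input are illustrative assumptions.
 import torch
 import torch.nn as nn
 
 class Encoder(nn.Module):
     def __init__(self, latent_dim=256):
         super().__init__()
         self.net = nn.Sequential(
             nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
             nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
             nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
             nn.Flatten(),
             nn.Linear(128 * 8 * 8, latent_dim),   # compress to the latent space
         )
 
     def forward(self, x):
         return self.net(x)
 
 class Decoder(nn.Module):
     def __init__(self, latent_dim=256):
         super().__init__()
         self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
         self.net = nn.Sequential(
             nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
             nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
             nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
         )
 
     def forward(self, z):
         return self.net(self.fc(z).view(-1, 128, 8, 8))
 
 encoder = Encoder()
 decoder_a = Decoder()   # trained to reconstruct person A (the subject)
 decoder_b = Decoder()   # trained to reconstruct person B (the target)
 
 # Training: each decoder learns to rebuild its own person from the shared latent space.
 opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder_a.parameters()), lr=1e-4)
 faces_a = torch.rand(8, 3, 64, 64)              # stand-in for real face crops of A
 loss_a = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
 opt.zero_grad(); loss_a.backward(); opt.step()
 
 # Swapping: encode a frame of the target video, but decode it with A's decoder,
 # so A's likeness is imposed on the target's pose and expression.
 frame_b = torch.rand(1, 3, 64, 64)
 swapped = decoder_a(encoder(frame_b))

In practice this loop runs over many thousands of aligned face crops per identity, which is why the process described above is so lengthy and storage-hungry.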

Generative Adversarial Network Method

A second methodology is the generative adversarial network (GAN). It builds upon the first method but is used to create completely synthetic images. Two algorithms are pitted against each other: a generator is fed random noise and tries to develop a sample image from it, while a discriminator is fed a stream of real images and judges how realistic the generated sample is by comparison. This is repeated countless times, training the generator to produce a slightly more realistic image after every comparison. Eventually, a highly realistic image is created. [6]
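The adversarial loop can be sketched in a few lines of PyTorch. The tiny fully connected networks, 28x28 image size, and 100 training steps below are illustrative assumptions to keep the example short; real systems use deep convolutional networks and far more iterations.

 # Minimal GAN sketch: a generator turns random noise into an image, a discriminator
 # scores real vs. generated, and each update nudges the generator toward more
 # realistic output. All sizes are illustrative assumptions.
 import torch
 import torch.nn as nn
 
 noise_dim, img_dim = 64, 28 * 28
 
 generator = nn.Sequential(
     nn.Linear(noise_dim, 256), nn.ReLU(),
     nn.Linear(256, img_dim), nn.Tanh(),         # fake image with pixels in [-1, 1]
 )
 discriminator = nn.Sequential(
     nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
     nn.Linear(256, 1),                          # realism score (logit)
 )
 
 opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
 opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
 bce = nn.BCEWithLogitsLoss()
 
 real_images = torch.rand(32, img_dim) * 2 - 1   # stand-in for a batch of real images
 
 for step in range(100):                          # in practice: many thousands of steps
     # --- train the discriminator to tell real from fake ---
     fake_images = generator(torch.randn(32, noise_dim)).detach()
     d_loss = bce(discriminator(real_images), torch.ones(32, 1)) + \
              bce(discriminator(fake_images), torch.zeros(32, 1))
     opt_d.zero_grad(); d_loss.backward(); opt_d.step()
 
     # --- train the generator to fool the discriminator ---
     fake_images = generator(torch.randn(32, noise_dim))
     g_loss = bce(discriminator(fake_images), torch.ones(32, 1))
     opt_g.zero_grad(); g_loss.backward(); opt_g.step()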

Technology Requirements

Creating deepfakes is resource intensive and requires a powerful computer. Important components include a high-end graphics card (GPU) with plenty of video random access memory (VRAM), a decent central processing unit (CPU), plenty of random access memory (RAM), and a large amount of hard drive space (HDD).

Minimum specifications

  • GPU: Nvidia GTX 1080 with 8GB VRAM
  • CPU: Intel i5
  • HDD: 100GB
  • RAM: 12GB

Optimal specifications

  • GPU: Nvidia RTX 2080 with 11GB VRAM
  • CPU: Intel i5
  • HDD: 100GB
  • RAM: 12GB

The most important requirements for creating deepfakes are a high-end GPU and plenty of RAM. These critical components dictate whether rendering a deepfake takes weeks or hours.[8]
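As a rough illustration, the snippet below checks a machine against the kind of minimums listed above. It assumes PyTorch and psutil are installed; the thresholds simply mirror the figures given here and are not hard limits.

 # Quick check of GPU VRAM, system RAM, and free disk space against the rough
 # minimums above (8 GB VRAM, 12 GB RAM, 100 GB disk). Assumes PyTorch and psutil.
 import shutil
 import psutil
 import torch
 
 gib = 1024 ** 3
 
 if torch.cuda.is_available():
     vram = torch.cuda.get_device_properties(0).total_memory / gib
     print(f"GPU: {torch.cuda.get_device_name(0)}, {vram:.1f} GiB VRAM "
           f"({'ok' if vram >= 8 else 'below minimum'})")
 else:
     print("No CUDA GPU detected; training would be impractically slow on CPU.")
 
 ram = psutil.virtual_memory().total / gib
 disk = shutil.disk_usage(".").free / gib
 print(f"RAM: {ram:.1f} GiB ({'ok' if ram >= 12 else 'below minimum'})")
 print(f"Free disk: {disk:.1f} GiB ({'ok' if disk >= 100 else 'below minimum'})")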

How is Deepfake Technology Used

Politics

With the use of deepfakes, videos of politicians saying and doing things they have not actually said and done have circulated the internet. Politicians are common victims because of the high-profile nature of the position: deepfake technology requires a large quantity of photos to learn from, and there is an abundance of material available when the technology is learning a politician’s face.

There are fears over how deepfakes could influence elections if the technology were used maliciously. An altered video could persuade voters that a candidate said or did something they did not, which could be detrimental to their campaign. During an election period, “October surprises”, in which harmful information about a candidate is deliberately released right before the election, are common. Experts believe deepfakes will feature prominently in the “October surprises” of the 2020 election. [9]

Methods of detecting deepfakes are not being developed as quickly as new methods of creating realistic ones, leading to concerns that a deepfake hurting a candidate’s image may not be disproven until after the voting period.[10]

Obama's Public Service Announcement

An example of the use of deepfakes on politicians is the infamous public service announcement video of Obama. BuzzFeed and filmmaker Jordan Peele used AI to create a video of Obama, pasting Peele’s mouth over Obama’s jaw.[11] Peele then said all sorts of things through Obama’s mouth, including insults directed at Donald Trump. Although many viewers found the video humorous, its realism demonstrated how dangerous the technology could be if used to intentionally spread false information.

Italy's Matteo Renzi's News Show

In 2019, an Italian satirical news show broadcast doctored video clips of various politicians saying inappropriate things, including former prime minister Matteo Renzi.[12] He was shown insulting other Italian politicians, and the clip spread quickly on social media. Many citizens did not realize the video was a deepfake and, believing it to be authentic, were outraged by Renzi’s apparent behaviour.

Adult Entertainment

Although deepfake technology has existed for decades, it came to public attention through the adult entertainment industry. It is widely used to target celebrities and high-profile figures by featuring them, without their consent, in adult media they were never involved in. In 2019 there were 14,678 deepfake videos online, and an astonishing 96% of them were pornographic in nature. [13] According to Deeptrace, a deepfake detection firm, the top four deepfake porn sites have received over 134 million views.

Of that 96%, 41% featured American and British celebrities, while an astonishing 25% featured K-pop stars. In fact, three K-pop stars were among the top 10 individuals most frequently targeted in deepfake pornography. [14]

Measures Taken Against Deepfake Pornography

In response to the rise of deepfake pornography, many online platforms have taken a stance to limit the posting and distribution of such content on their sites.

January 2018: Gfycat bans deepfakes

Gfycat is a popular GIF platform used to convert videos into small GIFs that are easily shared online to sites such as Twitter and Facebook, but most prominently Reddit. In 2018 the company announced that it would ban deepfakes and began actively removing posts with deepfake content.[15] However, because Gfycat is external to the social media platforms, users found that although the deepfakes hosted on Gfycat were taken down, the social media posts containing the now-broken Gfycat links remained up.

February 2018: Pornhub bans deepfakes

In February 2018, Pornhub announced it would ban AI-generated pornographic media on its site. Controversially, however, Pornhub’s 2019 “Year in Review” statistics listed, among the most-searched-for people, celebrities who were not involved in porn whatsoever. Pornhub had banned the term “deepfake” from its search function, but deepfake porn videos were still easily accessible by searching the name of a video’s target.[16]

February 2018: Reddit bans deepfakes

In February 2018, Reddit announced updated policies on non-consensual sexual imagery through a post on the site. The site ruled that media depicting people in a sexual context, created and posted without the subject’s permission, would be banned. The subreddit r/deepfakes, devoted entirely to AI-generated pornographic content of celebrities, was banned.[17]

January 2020: Facebook bans deepfakes

Facebook announced that AI-manipulated videos would be banned from the website as of January 2020. However, the criterion for removing a deepfake video is that it must be edited in a manner that is not obvious to the average person, meaning that if a doctored video is somewhat obviously edited, Facebook will not take it down. [18]

February 2020: Twitter bans deepfakes

Twitter announced a new policy regarding manipulated visual media in February 2020. The company announced it would ban harmful “synthetic or manipulated” media and label non-harmful deceptive media as “manipulated media”. According to Twitter, harmful content includes content that threatens someone’s physical safety or could cause “widespread civil unrest”.[19] Non-consensual deepfake porn videos meet this definition of harmful content and also violate Twitter’s non-consensual nudity policy.[20]

Film Industry

In the film industry, deepfakes are used to push creative limits and test boundaries set by previous technology. Deepfake technology can be used to improve overall video quality, colourize old footage, dub dialogue into other languages, regenerate voices, digitally de-age actors, and digitally resurrect them.

Colourization

Deepfake technology can also be used to turn black-and-white content into colour. A deep learning technique developed by NVIDIA allows artists to colourize one frame and watch the AI colourize the rest of the film in real time. [21] This is made possible by the redundancy of information between frames: “the convolutional neural network infers what the colours should be from just one colourized frame and fills in the colour in the remaining frames.” [21]

Language Dubbing and Voice Regeneration

Deepfake technology has been used for language dubbing, dubbing films into multiple languages and thereby reducing the cost of employing local voice actors.[22] Through voice regeneration, a machine learning model can learn an actor’s voice; this is also useful when an actor is unable to speak due to a medical condition, allowing production to move forward regardless.[22]

Figure 3. Samuel L. Jackson De-aged for Captain Marvel

Digital De-aging

A notable example of deepfake usage in the film industry is digital de-aging. Movies that have used this effect include “Captain Marvel”, in which Samuel L. Jackson was de-aged by approximately 25 years. The visual effects team used Jackson’s older movies as a reference, and the process was done frame by frame to ensure a natural end result. [22]

Digital Resurrection

Digital resurrection is the reconstruction of a person from scratch, most often a late actor. An example is Peter Cushing, who passed away in 1994 yet appeared in the Star Wars instalment “Rogue One” (2016), a recreation that was praised for its realism. [23]

Digital resurrection invites debate over moral and legal consequences, as some people believe it is morally wrong to resurrect an actor who has passed away. Furthermore, experts have raised concerns about grief hallucinations stemming from the stress of seeing resurrected figures. [24] Others believe the practice should be accepted with the permission of the late actor’s family, because the actor made such a significant impact in the role that re-casting could not do the legacy justice. Despite the controversy, companies such as Luka and the MIT Media Lab are investing heavily in research and development in this field. [24]

Identifying Deepfake Technology

Detecting Common Visible Characteristics

Fact-checking has become an increasingly great challenge for users, especially on digital platforms. Whether in video or picture format, deepfake technology can be used to create convincing content that can stump AI “deep learning” algorithms.[25] However, regardless of the technology underlying deepfakes, users can rely on simple techniques to spot cues that distinguish fake from real.

Facial Hints

Subtle hints in a subject’s face should not be dismissed, since simulated faces rely heavily on source photographs, and some expressions cannot be imitated because suitable photographs are hard to find online.[26] For instance, deepfake algorithms struggle to create facial features that mimic blinking. [27] Facial appearance and geometric features in deepfake videos, including the movement of the lips, teeth, and skin, can also reveal whether a video is real or fake. In fact, researchers have noted that common characteristics of deepfake videos include discolouration and blurring in these same areas of the face, the result of inconsistent rendering. [28]
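One widely used heuristic for the blinking cue is the eye aspect ratio (EAR), computed from facial landmarks around each eye. The sketch below assumes six landmark points per eye, as produced by common landmark detectors, and an illustrative threshold of 0.2; it is a simplified illustration of the idea rather than a production detector.

 # Eye aspect ratio (EAR) blink heuristic: given six landmarks around one eye,
 # the ratio of vertical to horizontal eye openness drops sharply when the eye
 # closes. A long video whose EAR never drops may warrant suspicion.
 # Landmark source and the 0.2 threshold are illustrative assumptions.
 import numpy as np
 
 def eye_aspect_ratio(eye: np.ndarray) -> float:
     """eye: (6, 2) array of landmark (x, y) points ordered around the eye."""
     vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
     horizontal = np.linalg.norm(eye[0] - eye[3])
     return vertical / (2.0 * horizontal)
 
 def blink_frames(ear_per_frame, threshold=0.2):
     """Indices of frames where the eye is likely closed."""
     return [i for i, ear in enumerate(ear_per_frame) if ear < threshold]
 
 # Toy usage: an open eye vs. a nearly closed one.
 open_eye = np.array([[0, 3], [2, 5], [4, 5], [6, 3], [4, 1], [2, 1]], dtype=float)
 closed_eye = np.array([[0, 3], [2, 3.4], [4, 3.4], [6, 3], [4, 2.6], [2, 2.6]], dtype=float)
 print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))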

Content

Though unnoticeable to some, other common characteristics of deepfake videos include poorly synchronized lip-syncing with the audio and blatantly offensive material. Given the current political climate and the trend of fake news, it comes as no surprise that deepfake technology can be used for malicious hoaxes. The best way to spot a deepfake, therefore, is to ask whether the headline or content of the video aligns with the brand and overall persona of the individual in it.[29] Simply said, fact-checking is the best way to combat deepfake technology, as well as any misinformation on the Web. In most instances, the manipulation of truth can be spotted because of an unconvincing performance or the delivery of a not-so-subtle message.

Audio/Lighting

Similar to how poor uses of Photoshop and Facetune are easily noticeable in photos, manipulated content created with deepfake technology can often be spotted by audiences. Poor lighting in deepfake videos can result in warped, distorted backgrounds and even mismatched shadows.[30] As mentioned earlier, face discolouration is another easily spotted characteristic, caused by inconsistent rendering in deepfake videos. [28]

Detecting Hidden Characteristics

To counteract misinformation created by deepfake technology, researchers have turned to technologies that differentiate legitimate videos from videos manipulated by AI. For example, in an effort to protect world leaders against deepfakes, researchers such as those from the University of California mapped out measures of facial movement and behaviour. [31] Open-source facial behaviour analysis toolkits have also been made widely available. For example, OpenFace provides software capable of facial recognition, head pose estimation, and eye-gaze estimation through its use of deep neural networks. [32]

Deep Learning

Deep learning is a subset of machine learning within artificial intelligence and can be used to detect the facial manipulations produced by deepfake technology. Through analysis of both unstructured and structured data, deep learning algorithms can uncover patterns in deepfake videos and images. [33] This method uses neural networks to cluster and detect similarities, allowing researchers to accomplish the challenging task of validating predictions about potential deepfake videos and images. [34] Notable successes in exposing deepfake videos include using deep learning to detect face warping[35] and inconsistent head poses.[36] Though the results from these machine-learning models are encouraging, limitations remain: accuracy rates mostly range from 65% to 85%, and accuracy drops further when models are tested on previously unseen deepfakes.[37]
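As a rough illustration of how such detectors are built, the sketch below trains a small convolutional network as a binary real-versus-fake classifier on face crops. The architecture, 128x128 input size, and the random stand-in batch are illustrative assumptions, not the specific models cited above.

 # Minimal sketch of a deep-learning deepfake detector: a small convolutional
 # network trained as a binary real-vs-fake classifier on face crops.
 # Architecture, input size, and the random stand-in data are illustrative assumptions.
 import torch
 import torch.nn as nn
 
 detector = nn.Sequential(
     nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
     nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
     nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
     nn.Linear(64, 1),                                      # logit: > 0 means "fake"
 )
 
 optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
 loss_fn = nn.BCEWithLogitsLoss()
 
 # Stand-ins for a labelled training batch (0 = real frame, 1 = deepfake frame).
 frames = torch.rand(16, 3, 128, 128)
 labels = torch.randint(0, 2, (16, 1)).float()
 
 loss = loss_fn(detector(frames), labels)
 optimizer.zero_grad(); loss.backward(); optimizer.step()
 
 # At inference time, a sigmoid over the logit gives the probability a frame is fake.
 prob_fake = torch.sigmoid(detector(frames[:1]))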

Growth in Deepfake Technology

Circulation on the Web

In 2020, Instagram reported over 50 billion images on its platform.[38] This illustrates how normal, everyday users leave behind a personal digital fingerprint that makes them vulnerable to deepfake technology. Recent studies have also shown the growth in popularity of deepfake videos on the Internet. In 2019, researchers found 14,678 deepfake videos circulating the web, most of which were pornographic in nature. [39] Not only does this represent growth of over 84% in a span of only nine months between 2018 and 2019, but it also paints a stark picture of how deepfake technology will be used in revenge porn and cyber-bullying. Further projections of the growth of deepfake technology are difficult, as the availability of deepfake videos on the Internet continues to grow at unprecedented rates. [40]

Industry Trends

Synthetic Media

As businesses continue to balance restrictions from the ongoing coronavirus pandemic, creative directors have looked to deepfake videos as substitutes for real-life actors. A recent example can be seen in corporate videos, as businesses can use AI to generate characters that deliver corporate training to employees.[41]

Marketplace Sellers

As the popularity of deepfake videos continues to normalize on the modern-day Web, so will the demand for customized deepfake videos. Open-source software applications currently drive the evolution of deepfake technology, but online service platforms may grow to accelerate the process of developing custom-made videos.[42] These platforms will not only allow for the advertisement of such custom services, but also help amateur users learn to use open-source deepfake tools more effectively and share their work globally.

Social Media and Commercial Use

Some social media platforms already allow users to apply face-swap filters that mimic deepfake AI algorithms.[43] This raises privacy concerns, as users put themselves further at risk of losing their personal information when exposed to biometric scans on social media platforms. As the commercialization of these face-swapping technologies continues, social media companies will be faced with the challenge of detecting deepfakes and tracking their propagation. Notable examples may be seen in upcoming elections, as deepfake technology can be used in politics to influence voter decisions and sway users into believing false content.[44]

Implications of Deepfakes

Positives

The benefits of the technology include improvements in artistic industries; for example, deepfake technology can be used to help us experience things that no longer exist, or never have. It has been used to edit video without the need for a reshoot and to recreate long-dead artists. [15] The technology also has potential in medicine, with current studies focusing on how deepfake technology can help synthesize realistic data so that researchers can develop new ways of treating diseases without using actual patient data. [15] In 2018 the Mayo Clinic and the MGH & BWH Center for Clinical Data Science collaborated to create deepfake MRI scans and found that, “by training algorithms on these medical images and on 10% real images, these algorithms became just as good at spotting tumours as an algorithm trained only on real images.”[15]

Negatives

Many of the negatives have already been experienced: deepfakes are often created to spread misinformation and have disrupted political campaigns through manipulated videos of politicians, damaging the democratic process by causing mass confusion.[45] This has been argued to reduce political freedom, as voters may not be able to distinguish fakes from true news. Deepfakes have also harmed individuals who become targets of pornographic deepfakes and face trauma, embarrassment, and reputational damage as a result.[46] As a whole, deepfakes reduce the credibility of real news through a process called reality apathy, in which constant contact with misinformation leads people to stop trusting what they see and hear, losing trust even in news outlets that are in fact reporting the truth.[45]


Implications for Fake News

Though the manipulation of videos and photos may raise further concerns in the future, the implications are already being felt across large populations as the threat of fake news continues to propagate among users on the Internet.

One consequence of this concern is greater awareness of trusted news brands. Surveys conducted across countries in 2019 show high levels of concern about fake news on the Internet, which can be correlated with the growth of deepfake videos in circulation. In fact, 67% of Americans surveyed reported being concerned about fake news, which has led to greater reliance on more reputable news sources for information.[47]


Legality

Canada

In Canada, the legal community is still in the early stages of developing specific laws to address the harm caused by deepfakes. Currently, however, there are existing causes of action under Canadian law that may be relied on, including laws covering copyright infringement, defamation, violation of privacy, appropriation of personality, and offences under the Criminal Code.[48]

For example, under the Criminal Code, creating deepfakes featuring individuals under the age of 18 can be prosecuted as child pornography. Furthermore, revenge pornography, extortion, fraud, and criminal harassment charges can also be laid depending on the circumstances of the deepfake.[48]

Appropriation of personality and defamation claims may also give celebrity victims a means of redress, as a great majority of early deepfakes featured celebrities whose images are widely available online, making them ideal targets.

Canadian lawmakers have also considered possible pathways to prosecution under the Canada Elections Act. Section 480.1 of the act includes provisions against publishing false statements to affect election results, punishable by five-year jail terms and fines of up to $50,000. [48]

United Kingdom

In the United Kingdom in May 2018, a 25-year-old was jailed for sixteen weeks and fined for creating a pornographic deepfake of an office intern. [49] The ruling fell under existing UK laws against revenge porn, as the deepfake was created in retaliation; groups such as the Women and Equalities Committee have since called for a specific law to provide better protection. [49]

United States of America

In the US, California passed legislation making it illegal to create or distribute deepfakes, justifying the law as protecting voters from convincing misinformation in the upcoming 2020 election. [50] The state viewed deepfakes as weaponized misinformation, but the law has come under scrutiny on free-speech grounds, and it has also been noted that in the US “it may be easier to challenge deep fake videos through copyright claims rather than through new laws”.[50] The state has also passed another law allowing anyone to sue someone who publishes a deepfake that attaches their likeness to pornographic material without their consent. In other states, “no law regulates deep fakes, though some legal and technical experts have recommended adapting current laws covering libel, defamation, identity fraud or impersonating a government official”. [51]

China

China took a slightly different approach to combating deepfakes in 2019: instead of outright banning their production and spread, the Cyberspace Administration of China stated that, because of the political risks deepfakes create, publishers of deepfake content would be required to provide a disclaimer clearly identifying the content as fake. [52] However, the law did not clarify how China planned to enforce the new regulation, as deepfakes are difficult to detect from the outset. Furthermore, the new offence is expected to be policed by content providers, who will also be held accountable if deepfakes are found on their platforms. [52]


Preventing and Combating Deepfakes

As mentioned previously, deepfakes can have disastrous effects for both individuals and businesses. At the core of the strategies below are prevention and education, as the best way to combat deepfakes is to prevent them from occurring in the first place.

Protection strategies for Victims

Deepfakes are time-consuming to create, and most individuals should not be overly concerned about having deepfake pornography made of them, as most of this effort is targeted at celebrities. However, the technology is improving rapidly, and it could become easier for disgruntled exes to create deepfake pornography for revenge.

A core strategy is to reduce the number of photographs and selfies of yourself on the internet. Since most high-end deepfake software requires many images to generate a realistic deepfake, it is important not to provide that source material in the first place.

As mentioned before, audio deepfakes can also be created, and these could potentially be used as scamming tools; for example, a scammer could use a fake audio clip of a victim’s family member pleading for money. Awareness of this tactic is key to prevention, and educating family members about the technology could prevent damage. Another tactic is to request a video call, as it would be practically impossible to create a deepfake on the spot.

Non-consensual deepfakes, regardless of the nature of the content, can be taken down through a variety of methods.

Cyber Civil Rights Initiative

The Cyber Civil Rights Initiative primarily deals with non-consensual pornography. It was created by its founder, Holly Jacobs, to empower other victims of non-consensual pornography to fight back against leaked content. [53]

According to the website, there are two general steps to take when taking down images on social media channels:

  • Make sure the post in question is against the platform’s policies
    • Facebook, Instagram, Reddit, Tumblr, Twitter, and Yahoo do not allow porn
    • Google does not allow nonconsensual and/or fake pornography on its search engine [54]
  • Document the Post
    • There are specific guidelines for each social media platform on how to report and remove inappropriate or involuntary content
    • For Google, victims can use this form to remove searches on the search engine. The caveat is that although it no longer shows up on the search engine, the host site could still have it.

The next strategy involves removing content hosted on websites other than the aforementioned social media channels, such as chat forums and porn sites.

DMCA

The main tool here is DMCA.com, a service named after the Digital Millennium Copyright Act. The site helps individuals and businesses take down copyrighted material, backed by the act itself. Though the act is US law, the website also provides services that comply with the Canadian equivalent of a DMCA takedown, which falls under the Copyright Modernization Act of Canada. [55]

Using this service is not limited to material that was formally copyrighted: if the content is the victim’s property, or the victim is the subject of the photo or video, that is enough to justify a takedown.

It is a paid service at $199 USD per site. However, for the more technically inclined, there is a $10-per-month Do-It-Yourself package that provides a toolkit for learning and conducting takedowns on your own.

Protection strategies for Businesses and Corporations

Deepfakes can cause a multitude of problems for businesses. Though businesses, especially larger ones, often have the capital to invest in detection software, technology is only part of the solution to prevent deepfakes from causing havoc. Education, again, is the best way to combat this threat, as it is fairly novel and most employees would not be aware of this issue. [56]

Companies should look into additional cybersecurity workshops and training sessions. It is advised to keep sessions short and to the point, with an emphasis on listing out top threats, how those threats look, and standard operating procedures when handling them.

Formalized security protocols are an excellent addition once the training sessions are complete. For example, scammers typically use phone calls with faked voices impersonating other staff in order to obtain crucial information. To prevent this, companies can establish protocols specifying the procedure employees should follow if they receive a suspicious call. Bitsight recommends instilling a “trust but verify” mantra when handling calls and requests.[57] Despite adding an extra step to processes, this essentially acts as a second factor of authentication, which is worth the time and effort for any business.

Finally, for companies that can afford it, AI-powered detection software has been created that uses the same deep learning algorithms behind deepfakes to detect whether content has been altered. Another technology involves watermarking content; one such tool, Amber Detect, has the stated mission of detecting malicious deepfakes that are “sowing disinfo and deception”.[58]

Companies Making a Change

Several large companies, like Facebook and Google, have been trying to stay ahead of the curve and develop tools to combat deepfakes.

Facebook

Facebook in particular has partnered with universities such as the Massachusetts Institute of Technology (MIT), Cornell, and UC Berkeley to build the Deepfake Detection Challenge (DFDC). The goal of the challenge is to provide academic researchers with deepfake datasets and benchmarks, allowing them to develop technology that can detect when AI has been used to alter a video. Facebook also includes a leaderboard to incentivize the industry to innovate and develop the best methods for detecting deepfake content. [59]

The dataset consists of over 100,000 videos and eight facial modification algorithms. It was created by Facebook using real, paid actors performing a variety of tasks. [60]

Google

Similar to Facebook, Google has also created a large dataset using paid actors. However, Google takes a different approach and uses the dataset to create thousands of deepfakes with publicly available methods. The dataset has been publicly released and is intended to assist researchers in developing methods and technology that can distinguish deepfakes from real clips.[61]

Google has also been advancing its research and contributions to fake audio detection. Google used its proprietary text-to-speech technology, familiar from applications such as Google Maps and Google Translate, to create thousands of spoken phrases. The phrases were drawn from newspaper articles and spoken by 68 different voices covering a range of accents. The dataset was then released to 150 research teams for the 2019 ASVspoof challenge, which invites researchers to create new technology that detects and combats fake audio.[62]
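A simplified sketch of the kind of pipeline such datasets support appears below: spectral features are extracted from labelled real and synthetic clips, and a classifier is trained to separate them. The placeholder waveforms, feature choice, and logistic-regression classifier are illustrative assumptions, not Google's or any challenge team's actual method; it assumes librosa and scikit-learn are installed.

 # Sketch of a spoofed-audio detector: log-mel-spectrogram features per clip,
 # then a simple classifier separating real from synthesized speech.
 # The placeholder waveforms stand in for clips loaded from a labelled dataset.
 import numpy as np
 import librosa
 from sklearn.linear_model import LogisticRegression
 
 SR = 16000  # sample rate assumed for all clips
 
 def clip_features(audio: np.ndarray) -> np.ndarray:
     """Mean log-mel-spectrogram vector for one waveform sampled at SR."""
     mels = librosa.feature.melspectrogram(y=audio, sr=SR, n_mels=64)
     return np.log(mels + 1e-6).mean(axis=1)     # one 64-dim vector per clip
 
 # Stand-ins for labelled clips (0 = real speech, 1 = synthesized speech).
 # In practice these would be loaded from a dataset, e.g. librosa.load(path, sr=SR).
 rng = np.random.default_rng(0)
 real_clips = [rng.normal(size=SR) for _ in range(4)]
 fake_clips = [np.sin(np.linspace(0, 2000, SR)) + 0.1 * rng.normal(size=SR) for _ in range(4)]
 
 X = np.stack([clip_features(c) for c in real_clips + fake_clips])
 y = np.array([0] * len(real_clips) + [1] * len(fake_clips))
 
 classifier = LogisticRegression(max_iter=1000).fit(X, y)
 print(classifier.predict_proba(X)[:, 1])        # probability each clip is synthetic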

Future uses of Deepfakes

Photography Industry

Topaz Labs

Topaz Labs is a software firm that sells various image-editing tools. Using AI, it developed its best-selling software, Gigapixel AI, which lets users convert a low-resolution image into a high-resolution one. Gigapixel AI does this by enlarging the image by up to 600%. It is trained to understand and mimic how neighbouring pixels relate to one another, and it feeds each pixel through two million processes that analyze and optimize it, so that blurry pixels are replaced by AI-generated ones, resulting in a crisp, clear image.
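The sketch below illustrates the general idea of learned upscaling (not Topaz’s actual implementation): a tiny network with a sub-pixel convolution is trained to reproduce a sharp image from its downsampled version, which it can then apply to genuinely low-resolution inputs. The scale factor, layer sizes, and training data are illustrative assumptions.

 # Learned upscaling sketch: a small network with a sub-pixel convolution
 # (PixelShuffle) is trained on (low-res, high-res) pairs, then compared with
 # plain bicubic enlargement. Sizes and data are illustrative assumptions.
 import torch
 import torch.nn as nn
 import torch.nn.functional as F
 
 scale = 4  # 4x per side here; the text above mentions enlargements up to 600%
 
 upscaler = nn.Sequential(
     nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
     nn.Conv2d(64, 3 * scale ** 2, 3, padding=1),
     nn.PixelShuffle(scale),                     # rearranges channels into a larger image
 )
 
 # Training pair: downsample a sharp image to make the low-resolution input,
 # then teach the network to reproduce the original from it.
 high_res = torch.rand(1, 3, 128, 128)           # stand-in for a sharp photo
 low_res = F.interpolate(high_res, scale_factor=1 / scale, mode="bicubic", align_corners=False)
 
 optimizer = torch.optim.Adam(upscaler.parameters(), lr=1e-3)
 loss = F.mse_loss(upscaler(low_res), high_res)
 optimizer.zero_grad(); loss.backward(); optimizer.step()
 
 # Baseline for comparison: plain bicubic enlargement of the same low-res input.
 bicubic = F.interpolate(low_res, scale_factor=scale, mode="bicubic", align_corners=False)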

Voice-Acting Industry

Modulate

Modulate is a software startup that offers what it calls “voice skins”, which allow users to make their voice sound like anyone else’s. It works by training AI on many samples of a person’s voice. Modulate’s goal is to license the technology to gaming companies so that players can sound like the characters they play in video games. To deter people from using fake voices to commit fraud, Modulate does not plan to offer the service outside of gaming companies, so users can only apply voice skins while playing those games. As an additional measure, each recording is embedded with an audio thumbprint that can easily be detected when needed by examining the audio’s waveform.

References

  1. 1.0 1.1 1.2 1.3 1.4 https://www.researchgate.net/publication/220720338_Video_Rewrite_Driving_Visual_Speech_with_Audio
  2. 2.0 2.1 2.2 2.3 2.4 2.5 https://www.businessinsider.com/deepfakes-explained-the-rise-of-fake-realistic-videos-online-2019-6#in-june-2019-a-deepfake-of-mark-zuckerberg-appeared-on-instagram-facebook-also-decided-to-leave-it-up-setting-a-precedent-for-leaving-manipulated-videos-on-their-platforms-9
  3. 3.0 3.1 3.2 https://www.cnet.com/news/facebook-twitter-and-youtube-grapple-with-altered-videos-ahead-of-the-2020-election/
  4. https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/?noredirect=on
  5. https://www.washingtonpost.com/technology/2020/01/06/facebook-ban-deepfakes-sources-say-new-policy-may-not-cover-controversial-pelosi-video/
  6. 6.0 6.1 https://arstechnica.com/science/2019/12/how-i-created-a-deepfake-of-mark-zuckerberg-and-star-treks-data/
  7. https://medium.com/swlh/the-role-of-deepfake-audio-in-the-growing-ai-voice-market-a68fd7fd8944
  8. https://medium.com/@jsoverson/from-zero-to-deepfake-310551e59aa3
  9. https://www.cnbc.com/2019/10/15/deepfakes-could-be-problem-for-the-2020-election.html
  10. https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/
  11. https://www.theverge.com/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed
  12. https://observers.france24.com/en/20191008-deepfake-video-former-italian-pm-matteo-renzi-sparks-debate-italy
  13. https://mc.ai/96-of-deepfakes-are-pornographic/
  14. https://koreajoongangdaily.joins.com/2020/05/17/features/deepfake-artificial-intelligence-pornography/20200517190700189.html
  15. 15.0 15.1 15.2 15.3 https://www.theverge.com/2018/1/31/16956460/gfycat-ai-generated-fake-porn-deepfakes-subreddit-reddit
  16. https://www.vice.com/en_us/article/wxez8y/pornhub-year-in-review-deepfake
  17. https://www.theverge.com/2018/2/7/16982046/reddit-deepfakes-ai-celebrity-face-swap-porn-community-ban
  18. https://www.bbc.com/news/technology-51018758
  19. https://www.vox.com/recode/2020/2/4/21122653/twitter-policy-deepfakes-nancy-pelosi-biden-trump
  20. https://help.twitter.com/en/rules-and-policies/intimate-media
  21. 21.0 21.1 https://news.developer.nvidia.com/ai-can-convert-black-and-white-clips-into-color/
  22. 22.0 22.1 22.2 https://amt-lab.org/blog/2020/3/deepfake-technology-in-the-entertainment-industry-potential-limitations-and-protections#:~:text=99%25%20of%20the%20circulated%20videos,entertainment%20industry%2C%20specifically%20film%20production.
  23. https://www.theguardian.com/commentisfree/2016/dec/21/peter-cushing-rogue-one-resurrection-cgi
  24. 24.0 24.1 https://www.nbcnews.com/mach/science/will-digital-resurrections-let-us-bring-back-dead-ncna865806
  25. https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/
  26. https://www.miteksystems.com/blog/what-is-a-deepfake-and-how-does-it-impact-fraud
  27. https://theconversation.com/detecting-deepfake-videos-in-the-blink-of-an-eye-101072/
  28. 28.0 28.1 https://www.wired.com/story/deepfakes-getting-better-theyre-easy-spot/
  29. https://slate.com/technology/2019/06/how-to-spot-deepfake-videos.html
  30. https://phys.org/news/2019-06-deepfakes-reveals.html
  31. https://openaccess.thecvf.com/content_CVPRW_2019/papers/Media%20Forensics/Agarwal_Protecting_World_Leaders_Against_Deep_Fakes_CVPRW_2019_paper.pdf
  32. https://cmusatyalab.github.io/openface/#posts-about-openface
  33. https://blog.goodaudience.com/training-a-deep-learning-model-for-deepfake-detection-914bbceb3fa9
  34. https://www.technologyreview.com/2020/06/12/1003475/facebooks-deepfake-detection-challenge-neural-network-ai/
  35. https://arxiv.org/abs/1811.00656
  36. https://arxiv.org/abs/1811.00661
  37. https://arxiv.org/pdf/2001.00179.pdf
  38. https://www.omnicoreagency.com/instagram-statistics/
  39. https://www.bbc.com/news/technology-49961089
  40. https://www.businessinsider.com/dangerous-deepfake-technology-spreading-cannot-be-stopped-2019-7
  41. https://www.wired.com/story/covid-drives-real-businesses-deepfake-technology/
  42. https://futurumresearch.com/research-notes/deepfake-technology-ecosystem/
  43. https://techcrunch.com/2020/01/03/tiktok-deepfakes-face-swap/
  44. https://www.brookings.edu/blog/techtank/2019/06/03/deepfakes-social-media-and-the-2020-election/
  45. 45.0 45.1 https://www.cnet.com/news/facebook-twitter-and-youtube-grapple-with-altered-videos-ahead-of-the-2020-election/
  46. https://timreview.ca/article/1282
  47. https://www.statista.com/chart/18343/share-concerned-about-what-is-real-and-fake-on-the-internet/
  48. 48.0 48.1 48.2 https://lop.parl.ca/sites/PublicWebsite/default/en_CA/ResearchPublications/201911E
  49. 49.0 49.1 https://www.huffingtonpost.co.uk/entry/what-is-deep-fake-pornography-and-is-it-illegal-in-the-uk_uk_5bf4197ce4b0376c9e68f8c5?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAAUblhuTNCPNDSGHZgNvl_djF2hamgXz7KIKyhF_s1oJoqRmqpdNTuXmA_FHc2xu1f0vl2kpgVbvPYaDEvGyeEnukCK_M7nM3vK2RXhle--zVfAX1lni2mpPDyPsda_keGPdNBVaM5kbuwaNTegwSedj8nc4CtMQxYcEeb3GSyEa
  50. 50.0 50.1 https://www.theguardian.com/us-news/2019/oct/07/california-makes-deepfake-videos-illegal-but-law-may-be-hard-to-enforce
  51. https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/
  52. 52.0 52.1 https://www.forbes.com/sites/emmawoollacott/2019/11/30/china-bans-deepfakes-in-new-content-crackdown/#4e897f1c3537
  53. https://www.cybercivilrights.org/online-removal/
  54. https://support.google.com/websearch/answer/3143948?vid=1-635802893385623583-4153477963&visit_id=637319240601652925-2953187746&rd=1
  55. https://www.dmca.com/faq/Does-Canada-have-a-version-of-the-DMCA-Takedown.
  56. https://clutch.co/it-services/resources/how-to-protect-business-deepfakes
  57. https://www.bitsight.com/blog/how-to-protect-your-organization-from-the-emerging-deepfake-threat
  58. https://ambervideo.co
  59. https://ai.facebook.com/blog/deepfake-detection-challenge/
  60. https://ai.facebook.com/datasets/dfdc/
  61. https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html
  62. https://www.blog.google/outreach-initiatives/google-news-initiative/advancing-research-fake-audio-detection/