What is the Truth Anyway?

From New Media Business Blog



Introduction to Deepfakes

Deepfakes are a form of artificial intelligence that alters data to create convincing text, images, video, audio, and real-time records of events that did not take place[1]. Although the technology is modern, the idea of manipulating media has existed for a long time. For instance, video manipulation existed in the 1990s through a technique known as 'Video Rewrite', which was used in movies to create footage of a person mouthing words they never spoke; it relied on existing footage to reorder images of the mouth and assemble the final video[2]. Deepfakes similarly use deep-learning methods to develop the final product; the most common methods are autoencoders and Generative Adversarial Networks (GANs)[3]. It is essential to note that both methods require substantial data, time, and computational power.

Autoencoders Explanation[4]


Autoencoders are neural networks that reduce the dimensionality of an input and then attempt to reconstruct the input in its original form. An autoencoder has three elements: the encoder, the code, and the decoder. The encoder compresses the input into a lower-dimensional latent space, producing the code, which captures the essential features of the input while discarding inessential ones. The code is then passed to the decoder, which attempts to reconstruct the original input. By repeating this process many times, the autoencoder learns which features are essential and how to reproduce them accurately.
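The compress-then-reconstruct loop above can be sketched in a few lines. This is a minimal toy, assuming a single linear encoder/decoder pair trained by gradient descent on synthetic data; real deepfake autoencoders are far deeper, but the mechanics are the same: reconstruction error shrinks as the network repeats the process.

```python
import numpy as np

# Minimal linear autoencoder sketch (an illustrative assumption, not any
# particular deepfake implementation).
rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that really live on a 2-D subspace,
# so the essential features fit in a 2-D code.
basis = rng.normal(size=(2, 8))
data = rng.normal(size=(200, 2)) @ basis

W_enc = rng.normal(scale=0.1, size=(8, 2))  # encoder: 8-D input -> 2-D code
W_dec = rng.normal(scale=0.1, size=(2, 8))  # decoder: 2-D code -> 8-D output

def loss(X):
    code = X @ W_enc      # compress into the latent space
    recon = code @ W_dec  # attempt to reconstruct the input
    return ((recon - X) ** 2).mean()

lr = 0.01
initial = loss(data)
for _ in range(500):
    code = data @ W_enc
    recon = code @ W_dec
    err = 2 * (recon - data) / data.size     # gradient of the mean squared error
    grad_dec = code.T @ err                  # gradient for the decoder weights
    grad_enc = data.T @ (err @ W_dec.T)      # gradient for the encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(data)
print(initial, final)  # reconstruction error decreases over the repetitions
```

Repeating the encode/decode cycle drives the reconstruction error down, which is how the network "learns more about the essential features" described above.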

Generative Adversarial Networks (GAN)

A Generative Adversarial Network (GAN) is composed of two separate neural networks: the generator and the discriminator. The generator produces samples of manipulated data and tries to fool the discriminator into believing each sample is genuine, while the discriminator compares the samples against the original data. Through repetition of this process, both networks learn the important features of the data over time, and the discriminator becomes harder to fool. Eventually, the generator produces a sample that fools the discriminator, and that sample becomes the final product.
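The adversarial loop can be illustrated with a deliberately tiny example. This sketch assumes one-dimensional data and a logistic-regression discriminator (far simpler than the deep networks in a real GAN); the generator's single parameter drifts toward the real data as the two sides compete.

```python
import numpy as np

# Toy adversarial loop (an illustrative assumption, not a production GAN).
rng = np.random.default_rng(0)

mu = 0.0          # generator parameter: tries to match the real mean (4.0)
w, b = 0.0, 0.0   # discriminator parameters

def D(x):
    # Discriminator: estimated probability that x came from the real data.
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

lr = 0.05
for _ in range(3000):
    real = rng.normal(4.0, 1.0)       # a genuine sample
    fake = mu + rng.normal(0.0, 1.0)  # the generator's manipulated sample

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    w += lr * ((1 - D(real)) * real - D(fake) * fake)
    b += lr * ((1 - D(real)) - D(fake))

    # Generator step: nudge mu so the fake sample fools the discriminator.
    mu += lr * (1 - D(fake)) * w

print(mu)  # drifts toward the real mean as the two networks compete
```

As the generator's samples approach the real distribution, the discriminator can no longer separate real from fake, mirroring the point at which a deepfake "fools the discriminator" and becomes the final product.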

Types of Deepfakes

Deepfakes come in the form of images, videos, text, audio, and real-time applications. Video and audio deepfakes are widely used to alter parts of movies to keep a consistent flow, whereas deepfake images are widely used across internet communities to create satirical material[5]. In this section, we take a deeper look into these types of deepfakes:

Deepfake Images

Deepfake images are widely used on social media to create content for viewers. Deepfake makers can create blog posts and stories showcasing the lifestyle of a person who does not even exist, which is sometimes used for marketing purposes. A persona by the name of Maisy Kinsley was presented as a Bloomberg reporter with a strong presence on Twitter and LinkedIn; it was later discovered that this person was not real and had been created for financial gain[6].

Deepfake Videos

Deepfake videos are the most widely used type of deepfake. They rely on raw data and attempt to learn important features for replication. In 2020, a Korean company named Hyperconnect released software called MarioNETte, which can create deepfake videos of public and historical figures: a person replicates the targeted figure's facial behaviour, and that data is mapped onto the original figure's features so the deep-learning algorithm can learn the differences and replicate the changes onto the historical or public figure[7].

Deepfake Texts

Deepfake texts are AI-generated texts that intend to duplicate a person's style of writing[8]. For instance, given multiple texts written by a certain author, a deep-learning algorithm can learn the author's sentence construction and word choice and then generate new content that appears to have been written by that author. Another use of textual deepfakes comes from groups pursuing a particular agenda or pushing society to form an opinion on a topic, usually political. Textual deepfakes are easy to create in bulk, generating many different sentences and paragraphs that support one idea; the problem is compounded because textual deepfakes are harder to detect than other forms of deepfakes[9].

Deepfake Audio

A human voice can be replicated as deepfake audio to make the target say things they never said. To do this, a recording containing the target's voice is needed so that deep-learning algorithms can clone it[10]. The cloned voice becomes more accurate as more recordings of the target's voice become available.

Real-Time Deepfakes

Today, fraudsters can generate deepfake audio in real time to pass voice authentication, and employees could use real-time deepfake video to appear present during online meetings when they are not. Software such as DeepFaceLive can swap a user's face with another person's during a live stream[11].

Puffer Coat Pope[12]

Viral Examples of Deepfake Technology

AI-Created Song Based on the Weeknd and Drake Samples

A viral AI-created song called "Heart on my Sleeve" featured vocals mimicking Drake and The Weeknd as well as Metro Boomin's signature producer tag. It went viral in April 2023, racking up millions of views. However, Drake and The Weeknd's label, Universal Music Group, applied legal pressure to have the song taken down[13].

Picture of the Pope Wearing a Puffer Coat

In another example, an AI-generated image of the Pope wearing a puffer coat went viral in March 2023 and fooled many into believing it was real; it was later revealed that the photo had been generated by AI[14].

Timeline and Prominent Events in Deepfake History

Invention of GAN by Ian Goodfellow in 2014

Deepfake technology's origins can be traced back to 2014 when Ian Goodfellow invented the Generative Adversarial Network (GAN). This development allowed for the creation of deepfakes by enabling the training of AI to generate highly realistic and convincing media[15].

Free and Wide Access to Deepfake Technology on Reddit in November 2017

The turning point for deepfake technology came in November 2017, when an anonymous Reddit user shared the underlying algorithms, granting free and wide access to deepfake technology. The posting marked a significant moment, as it made the technology accessible to a much wider audience[16].

Rise of Deepfake Apps (e.g., FaceApp) in 2019

As a result, numerous apps utilizing deepfake technology emerged. One notable example is FaceApp, which gained popularity in 2019 due to its AI-based age-changing filter. This viral trend showcased the potential of deepfake technology to alter and manipulate images.

Controversial Videos in 2020

In 2020, deepfake videos took the internet by storm when manipulated footage of prominent figures talking about democracy surfaced. However, it was later revealed that these videos were created by a nonpartisan advocacy group, RepresentUS, in the United States. They created this public service campaign to highlight the risk of deepfakes on democracy. This incident displayed the technology's capacity to generate compelling and deceptive content, initiating discussions about its impact on public perception and trust in media.

2020 was a turning point for deepfakes to be utilized across various industries and a wider population[17].

Impact of Deepfake Videos During the Turkish and US Elections

During the Turkish elections in 2023, a concerning incident emerged where a video of a candidate speaking fluent English went viral[18]. It was later revealed to be a deepfake video, raising alarm about how deepfake technology can provide false information to skew perspectives in politics. Experts predict that it could continue to impact politics during the 2024 US elections[19]. This situation underscores the urgent need to implement legislation against deepfake videos before they cause significant harm to the democratic process[20].

Social Media Response to Deepfakes

Twitter took action in October 2020, when the platform began labeling deepfake content as "manipulated" and warning users that it "may not be real". The site even removes content that is severely altered or highly likely to misinform[21]. This step represents one of the first measures taken by a social media platform to address the spread of deceptive media[22].

Facebook's policies regarding deepfakes target visibly altered videos or pictures. However, these policies (implemented in 2020) fail to manage and take down the majority of videos that misinform the public[23]. Media that is not visibly altered is allowed to remain on the platform, and media that is mislabeled, taken out of context, or simply a "cheap fake" (a cheaply made manipulation) is not taken down[24].

Scams and Frauds Associated with Deepfake Technology

Types of Scams (Financial, Identity Theft, Political)

Deepfake technology has allowed various types of scams (financial, identity, and political) to increase[25]. As technology advances, criminals have found innovative ways to exploit it for malicious purposes. The following are some prominent scam types associated with deepfake technology:

1. Financial Scams (Liveness Bypass): This method involves criminals swapping or editing biometric data to bypass security measures. By using deepfake technology, fraudsters can manipulate facial recognition systems or other biometric authentication methods to gain unauthorized access to financial accounts and commit fraud. In 2022, liveness bypass accounted for 34% of fraud cases in the United States and 22% in Canada[26].

2. Identity Theft: Edited or Forged ID Card: Deepfake technology enables the alteration of identification documents, such as driver's licenses or passports. In 2022, edited ID cards contributed to 22% of fraud cases in the U.S. and 24% in Canada. Forged ID cards can be used for various fraudulent activities, including accessing restricted areas or committing financial crimes. In 2022, forged ID cards accounted for 13% of fraud cases in the U.S. and 18% in Canada[27].

3. Political Scams: Deepfake's potential to create convincing fake videos of political figures has raised concerns about its impact on public perception. Altered political videos can spread false information, influence public opinion, and create discord during elections.

In North America, the proportion of deepfake-related incidents more than doubled from 2022 to the first quarter of 2023, indicating a growing trend of criminals exploiting this technology. The rise of deepfake scams poses significant challenges for law enforcement, businesses, and individuals[28].

Case Study: CEO Impersonation Leading to Fraudulent Transfer

In one case involving deepfake technology, criminals used AI-based software to impersonate a chief executive's voice, leading to a fraudulent transfer of €220,000 ($243,000) in March[29]. This incident raised concerns about the potential risks of deepfake technology.

The CEO of a U.K.-based energy firm received a call that he believed was from his boss, the chief executive of the company's German parent organization. The impersonator instructed him to urgently transfer the funds to a supposed Hungarian supplier, stressing that the transaction had to be completed within an hour. Trusting the voice on the other end of the call, the CEO complied and initiated the transfer.

This case serves as an example of how deepfake technology can have profound consequences on businesses, surpassing what was previously known about its potential impact. The criminals behind this intricate scheme successfully manipulated the CEO's perception of the situation, exploiting the trust and authority associated with the voice of a high-ranking executive. The incident not only resulted in significant financial loss for the energy firm but also highlighted the need for increased awareness and preparedness against such sophisticated cyber-attacks.

Criminals now utilize publicly available voice clips or samples to alter what individuals are saying. With access to just a few audio recordings, attackers can create deceptive voice replicas that convincingly simulate real individuals, leading to further misuse of this technology.

Although the CEO grew suspicious when the scammers demanded a second transfer, the criminals evaded the law and were never caught[30]. This highlights the challenges law enforcement faces in dealing with deepfake crimes, as the technology enables anonymity and sophisticated obfuscation of the culprits' identities.

This incident shows the need for increased vigilance and precautionary measures to protect against deepfake attacks. Businesses and individuals need to be cautious when handling sensitive financial transactions and should adopt robust authentication protocols to verify identities effectively. Furthermore, raising awareness about the risks of deepfake technology and investing in up-to-date cybersecurity measures can help mitigate the potential harm posed by such cyber-attacks. Staying ahead of these threats and developing effective strategies to counter deepfake threats is important in ensuring digital security.

Deepfake Misuse in China and New Legislation

AI Face-Swapping Technology for Financial Scams

In China, the use of AI face-swapping technology is witnessing a surge in financial scams. Criminals use this technology to manipulate images, swapping their faces with those of friends and family of the victim, and then demand fraudulent transfers[31].

Introduction of New Laws in January 2023 to Combat Misinformation

In response to these alarming developments, authorities introduced new laws in January 2023. These laws were created to hold service providers and users accountable for producing and spreading false information[32]. These legislative measures aim to decrease the misuse of AI-based technologies for deceptive purposes and highlight the growing need for regulations to safeguard against emerging threats posed by deepfake technology and the potential for misinformation.

Regulation and Legal Responses to Deepfakes

As previously discussed, Twitter stands out among social media platforms for taking tangible steps to address misinformation, including deepfakes. Meanwhile, the European Union has issued a warning to the tech giants Alphabet, Meta, and Twitter[33]. The EU emphasized that these companies must act to prevent the spread of deepfakes and misinformation or face fines for their inaction[34].

While fines alone may not be a significant deterrent for big tech companies, the warning signals a growing recognition of the dire need for regulation. It represents a small step towards holding platforms accountable for the content they host and the potential harm that can arise from the manipulation of media using AI-driven tools.

US States Regulating Deepfakes[35]

As the technology continues to advance and the potential for deepfakes to be misused becomes more apparent, tech companies and regulatory authorities must collaborate on strategies for identifying, combating, and preventing the harmful effects of deepfake technology. The EU's warning is a call to action for the tech industry to step up in developing detection tools and implementing transparency measures to safeguard platforms from the threats posed by manipulated content.

Deepfake Laws in the USA

In the United States, several states have begun to take steps to regulate deepfakes. It is crucial to note, however, that most of the regulations focus on deepfakes used in a harmful manner, such as creating explicit or threatening material. While these laws aim to protect individuals from potential harm caused by the misuse of deepfake technology, they have sparked discussions about the implications for freedom of speech; some individuals have expressed concerns that such regulations may infringe upon their First Amendment rights.

As of June 16, 2023, several states, including Louisiana, Illinois, Massachusetts, and New Jersey, have proposed deepfake laws. These states are actively working towards passing and enacting regulations that address the challenges posed by the misuse of deepfake technology[36]. These legislative efforts reflect a growing recognition of the need to balance protecting individuals from harm with preserving the fundamental right to free expression. By creating and implementing clear guidelines and measures, these states strive to ensure that deepfake technology is used responsibly and ethically.

Benefits of GAN


Entertainment

AI-generated music, art, and video are among the most common uses of GAN technology consumed by the general public. GANs have transformed the entertainment industry by giving creators a new tool to produce content that was once very difficult or impossible to create through traditional means, such as substituting one actor for another in a show or making a celebrity sing a song in a foreign language[37].


Education, Training, and Research

GANs can be used to create immersive, hands-on experiences for educational, training, and research purposes through the generation of simulated worlds. Currently, GANs are being used to recreate crime scenes to better understand how perpetrators escaped, to generate realistic MRI images that train doctors without violating patient privacy, and to recreate historical scenes and figures for students to interact with[38][39]. An example of interacting with the past comes from MyHeritage, a genealogy platform, which developed a technology called Deep Nostalgia that uses GANs to colourize and animate old family photos, giving individuals insight into how their ancestors might have moved if they were alive[40].

Identity Protection

For identity protection, GANs can be used to alter an individual's characteristics, from applying minor visual and audible filters to completely replacing their face with an AI-generated avatar. GANs can also be used to generate fake accounts using fake personal data to prevent hackers from stealing actual personal information[41].

Tangent.AI's Advertisement to Businesses[42]


Hyper-Personalization

Hyper-personalization is a relatively new advertising method used by marketers to deliver more relevant messaging based on audiences' personal preferences and to reach broader markets. In the cosmetics industry, it is used to alter the skin tone or ethnicity of models so customers can better visualize products on their own skin tone. Using GANs, hyper-personalization can be achieved at a much lower cost than traditional marketing, as a company can hire a single model and alter their appearance into many different models[43]. Tangent.AI is a company that uses AI to help businesses improve their marketing campaigns, and it released an advertisement to show businesses the benefits of deepfaked models[44].


Social Engineering

Impersonation and defamation are common negative consequences of deepfakes, which can cause reputational damage and spread misinformation about things a person never said or did. Hackers use deepfakes for social engineering, the act of impersonating a trusted individual to gain access to private information, resulting in scams and hacks[45]. One example of deepfakes used for social engineering is the rise of fake LinkedIn accounts. These accounts typically contain fake profile pictures, job titles, companies, and work experience. The goal of these fake accounts is to connect with people in high positions, such as security and tech executives, and bait them into clicking links containing malware in order to gain control of their systems[46].

Denial of Evidence

A rising concern is that deepfake technology may become so advanced that it is impossible to distinguish a deepfake from reality, leading people to use deepfakes as a defense against allegations in court. In 2018, a fatal crash resulted in the death of Apple engineer Walter Huang. Huang's family sued Tesla and wanted to use a past statement by Elon Musk (CEO of Tesla) about the safety of Tesla's autonomous vehicles as evidence. In the statement, recorded at the 2016 Code Conference in Los Angeles, Musk said that "A Model S and Model X at this point can drive autonomously with greater safety than a person, right now". Tesla's lawyers argued that Musk did not recall making the claim and that the recording could be a deepfake, as he is a popular target for deepfakes. After some back and forth, the judge rejected the deepfake claim as a viable defense, reasoning that it could set a precedent allowing Musk and other celebrities to say whatever they want without consequence, and ordered Musk to give a deposition, an out-of-court oral testimony[47].

Ethical Concerns

With the introduction of GANs to the public, many ethical concerns have arisen. The most common concerns currently revolve around the ownership of AI-generated art and the ability to commercialize it. AI-generated art is often produced by taking works from a collection of artists and splicing them together to create a new piece, and these pieces frequently do not credit the original artists. When new art is created from the work of dead artists, a further question arises: who owns the voice, face, and art of the dead? Since deceased artists cannot give consent, it is unclear whether creators using GANs have genuine authority over the resulting pieces[48].

Technical Solutions

Artificial Intelligence

Although deepfakes are created using artificial intelligence, AI can also be used to detect them. AI deepfake detectors are built using deep-learning methods similar to those used to create deepfakes[49].

One way AI is used against deepfakes is by detecting technical nuances and inconsistencies invisible to the human eye. Microsoft Video Authenticator is an AI tool developed by Microsoft that calculates the probability that a photo or video has been artificially manipulated. It does this by searching for the blending boundaries introduced when a deepfake is composited and by observing subtle fading or greyscale elements. Once the content is analyzed, the tool provides a confidence score indicating whether the content was manipulated. The tool was trained on FaceForensics++[50], a public dataset containing over 1.5 million manipulated images, and tested on the Deepfake Detection Challenge dataset[51], which contains over 100,000 manipulated clips[52].

A second way AI is used against deepfakes is by cataloguing proven deepfakes and digitally fingerprinting them so that potential deepfakes can be detected[53]. This is the approach of Operation Minerva, a program led by the copyright-enforcement firm Takedown Piracy, which collects previously discovered deepfakes and uses them as references to check whether a new piece of content is a modification of an existing deepfake in the catalog[54]. If a match is found, the content is flagged as a deepfake and digitally fingerprinted for use in future detection[55].
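The catalog idea reduces to fingerprinting known deepfakes and checking new content against the collection. The sketch below is a deliberate simplification: it assumes an exact SHA-256 fingerprint, whereas systems like Operation Minerva rely on more robust fingerprints that survive re-encoding and cropping; the function names here are hypothetical.

```python
import hashlib

# Catalog of fingerprints of previously discovered deepfakes
# (simplified sketch; exact hashing is an assumption for illustration).
catalog = set()

def fingerprint(content: bytes) -> str:
    # A hash acts as a digital fingerprint of the content.
    return hashlib.sha256(content).hexdigest()

def register_deepfake(content: bytes) -> None:
    # A proven deepfake is fingerprinted and added to the catalog.
    catalog.add(fingerprint(content))

def is_known_deepfake(content: bytes) -> bool:
    # New content is checked against previously discovered deepfakes.
    return fingerprint(content) in catalog

register_deepfake(b"frames of a known manipulated clip")
print(is_known_deepfake(b"frames of a known manipulated clip"))  # True
print(is_known_deepfake(b"an unrelated, unseen video"))          # False
```

Each confirmed match grows the catalog, which is why the approach gets stronger as more deepfakes are discovered and fingerprinted.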

A third way AI is used against deepfakes is by using biometrics to determine whether a potential deepfake portrays authentic human characteristics such as eye movements. Intel's FakeCatcher is an AI tool that uses biometrics to determine whether a video is deepfaked by using photoplethysmography (PPG), a technique that involves observing the change of colour in an individual's facial veins that occurs when a heart pumps. FakeCatcher maps the individual's PPG signals across their face and determines if the individual is real or fake using deep learning algorithms[56].

Simplified View of a Blockchain[57].


Blockchain

In simple terms, a blockchain is a decentralized, immutable ledger that records information that can be verified and re-verified by all users[58].

Decentralized: The blockchain is not controlled by a single person or group; instead, it is controlled by its collective users[59].

Immutable: Once information is entered into the blockchain, it is irreversible[60].

Hashing Algorithm[61]

A blockchain consists of blocks that are made of 3 components: data, hash, and previous hash[62].

Data: Information stored in the block, such as sender information, timestamp, file type, hash, and previous hash[63].

Hash: A form of unique identification created using an algorithm containing a combination of letters and numbers to identify the block. Essentially, a hash acts as a fingerprint[64].

Previous Hash: The hash of the prior block, which links the current block to the one before it, creating a chain. If a block's stored previous hash does not match the actual hash of the prior block, the block is rejected from the blockchain[65].

When a file is uploaded to the blockchain, a hash is automatically created for it through an algorithm. The first block of a blockchain is called the Genesis block; it contains data and a hash but no previous hash, since there are no blocks before it to refer to[66]. Using the unique hash of the file stored in the Genesis block, anyone who receives a copy of the file can verify its authenticity by recomputing the hash of the received file and comparing it to the original hash stored in the Genesis block. The immutable nature of the blockchain ensures that the original hash never changes. Therefore, if both hashes are identical, there have been no alterations to the received file; if they differ, the received file has been tampered with. Using the blockchain, users can find where the tampering happened, identify who tampered with the file, reject it, and track the reputation of content creators[67].
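The data/hash/previous-hash structure and the verification step can be sketched directly. This is a minimal illustration, assuming SHA-256 and a plain Python list standing in for the distributed ledger; a real blockchain adds consensus and networking on top.

```python
import hashlib

# Minimal sketch of the block structure described above (assumptions:
# SHA-256 hashing, a local list in place of a distributed ledger).
def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def make_block(data: str, previous_hash: str) -> dict:
    # Each block holds its data, the previous block's hash, and its own hash.
    return {"data": data,
            "previous_hash": previous_hash,
            "hash": sha256(data + previous_hash)}

# The Genesis block has no previous hash to refer to.
genesis = make_block("fingerprint of original_video", "")
chain = [genesis, make_block("next upload", genesis["hash"])]

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i > 0 else ""
        if block["previous_hash"] != prev:
            return False  # broken link between blocks
        if block["hash"] != sha256(block["data"] + block["previous_hash"]):
            return False  # block contents no longer match the stored hash
    return True

print(verify(chain))           # True: the chain is intact
chain[0]["data"] = "tampered"  # alter the Genesis block's data
print(verify(chain))           # False: the stored hash no longer matches
```

Recomputing a block's hash and comparing it to the stored value is exactly the check a recipient performs on a received file: identical hashes mean no alterations, while any mismatch reveals tampering.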

Final Remarks

The use of deepfake technology is on the rise, and legislators must now create clear criteria for who should have access to it, how deepfake material should be handled in court cases, and general laws regarding the creation and use of deepfakes. We cannot overlook the enhancements that deepfakes could provide for education, marketing, and entertainment. However, these enhancements come with privacy, security, and ethical risks. Nevertheless, as with other technologies, as its use increases and begins to pose a direct threat to individuals, technologists and researchers will find ways to ease detection and limit the threats it poses. Fortunately, deepfakes do not threaten the general public as much as they threaten public figures, which lowers the population that could be negatively impacted. Still, we must always remember that not everything we see online is real, and there are always two sides to every story.


Hana Abid, Mohamed ElGamal, Jamie Yu
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada


  1. https://www.merriam-webster.com/dictionary/deepfake
  2. http://chris.bregler.com/videorewrite/
  3. https://recfaces.com/articles/what-is-deepfake#3/
  4. https://www.v7labs.com/blog/autoencoders-guide
  5. https://www.spiceworks.com/it-security/cyber-risk-management/articles/what-is-deepfake/
  6. https://www.spiceworks.com/it-security/cyber-risk-management/articles/what-is-deepfake/
  7. https://www.spiceworks.com/it-security/cyber-risk-management/articles/what-is-deepfake/
  8. https://www.europarl.europa.eu/RegData/etudes/STUD/2021/690039/EPRS_STU(2021)690039_EN.pdf
  9. https://www.wired.com/story/ai-generated-text-is-the-scariest-deepfake-of-all/
  10. https://www.spiceworks.com/it-security/cyber-risk-management/articles/what-is-deepfake/
  11. https://www.spiceworks.com/it-security/cyber-risk-management/articles/what-is-deepfake/
  12. https://www.theguardian.com/commentisfree/2023/mar/27/pope-coat-ai-image-baby-boomers
  13. https://www.nytimes.com/2023/04/19/arts/music/ai-drake-the-weeknd-fake.html
  14. https://www.theguardian.com/commentisfree/2023/mar/27/pope-coat-ai-image-baby-boomers
  15. https://www.technologyreview.com/2018/02/21/145289/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination/
  16. https://www.scientificamerican.com/article/detecting-deepfakes1/
  17. https://www.technologyreview.com/2020/12/24/1015380/best-ai-deepfakes-of-2020/
  18. https://www.euronews.com/next/2023/05/12/ai-content-deepfakes-meddling-in-turkey-elections-experts-warn-its-just-the-beginning
  19. https://www.wired.com/story/chatgpt-generative-ai-deepfake-2024-us-presidential-election/
  20. https://www.brookings.edu/articles/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/
  21. https://help.twitter.com/en/rules-and-policies/manipulated-media
  22. https://www.vox.com/recode/2020/2/4/21122653/twitter-policy-deepfakes-nancy-pelosi-biden-trump
  23. https://rollcall.com/2019/06/13/social-media-should-be-accountable-for-deepfake-content-intelligence-experts-say/
  24. https://qz.com/1781809/facebooks-deepfake-ban-wont-remove-most-visual-misinformation
  25. https://www.businesswire.com/news/home/20230427005427/en/One-Third-of-Global-Businesses-Already-Hit-by-Voice-and-Video-Deepfake-Fraud
  26. https://sumsub.com/newsroom/new-north-america-fraud-statistics-forced-verification-and-ai-deepfake-cases-multiply-at-alarming-rates/
  27. https://sumsub.com/newsroom/new-north-america-fraud-statistics-forced-verification-and-ai-deepfake-cases-multiply-at-alarming-rates/
  28. https://sumsub.com/newsroom/new-north-america-fraud-statistics-forced-verification-and-ai-deepfake-cases-multiply-at-alarming-rates/
  29. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
  30. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
  31. https://www.reuters.com/technology/deepfake-scam-china-fans-worries-over-ai-driven-fraud-2023-05-22/
  32. https://www.reuters.com/technology/chinas-rules-deepfakes-take-effect-jan-10-2022-12-12/
  33. https://www.reuters.com/technology/google-facebook-twitter-will-have-tackle-deepfakes-or-risk-eu-fines-sources-2022-06-13/
  34. https://eucrim.eu/news/europol-report-criminal-use-of-deepfake-technology/
  35. https://www.bloomberg.com/news/articles/2023-06-20/deepfake-porn-political-ads-push-states-to-curb-rampant-ai-use#xj4y7vzkg
  36. https://www.bloomberg.com/news/articles/2023-06-20/deepfake-porn-political-ads-push-states-to-curb-rampant-ai-use
  37. https://q5id.com/blog/a-fundamental-guide-to-deepfakes-how-it-works-use-cases-and-more
  38. https://q5id.com/blog/a-fundamental-guide-to-deepfakes-how-it-works-use-cases-and-more
  39. https://developer.nvidia.com/blog/ai-can-generate-synthetic-mris-to-advance-medical-research/
  40. https://www.myheritage.com/deep-nostalgia
  41. https://q5id.com/blog/a-fundamental-guide-to-deepfakes-how-it-works-use-cases-and-more
  42. https://www.pocket-lint.com/apps/news/150275-advertising-deep-fake/
  43. https://www.intelligencygroup.com/blog/digital-roundup-24-6-22/
  44. https://www.pocket-lint.com/apps/news/150275-advertising-deep-fake/
  45. https://q5id.com/blog/a-fundamental-guide-to-deepfakes-how-it-works-use-cases-and-more
  46. https://www.amperesec.com/newsarchive/deepfakes-in-your-linkedin
  47. https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-are-not-amused
  48. https://www.orfonline.org/expert-speak/debating-the-ethics-of-deepfakes/
  49. https://www.nutanix.com/theforecastbynutanix/technology/what-data-scientists-are-doing-to-detect-deepfakes#:~:text=In%20fact%2C%20organizations%20already%20are,videos%2096%25%20of%20the%20time.
  50. https://www.arxiv-vanity.com/papers/1901.08971/
  51. https://www.arxiv-vanity.com/papers/2006.07397/
  52. https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/
  53. https://www.techtarget.com/whatis/definition/deepfake
  54. https://www.prweb.com/releases/operation_minerva_rolls_out_to_combat_deepfakes_and_revenge_porn/prweb16107117.htm
  55. https://www.techtarget.com/whatis/definition/deepfake
  56. https://www.youtube.com/watch?v=WYjJM49559I
  57. https://youtu.be/SSo_EIwHSd4
  58. https://www.weforum.org/agenda/2021/10/how-blockchain-can-help-combat-threat-of-deepfakes/
  59. https://www.investopedia.com/terms/b/blockchain.asp#:~:text=In%20Bitcoin's%20case%2C%20blockchain%20is,recorded%20and%20viewable%20to%20anyone.
  60. https://www.investopedia.com/terms/b/blockchain.asp#:~:text=In%20Bitcoin's%20case%2C%20blockchain%20is,recorded%20and%20viewable%20to%20anyone.
  61. https://www.weforum.org/agenda/2021/10/how-blockchain-can-help-combat-threat-of-deepfakes/
  62. https://youtu.be/SSo_EIwHSd4
  63. https://youtu.be/SSo_EIwHSd4
  64. https://www.weforum.org/agenda/2021/10/how-blockchain-can-help-combat-threat-of-deepfakes/
  65. https://youtu.be/SSo_EIwHSd4
  66. https://youtu.be/SSo_EIwHSd4
  67. https://medium.com/coreledger/solving-the-deepfake-problem-proving-the-authenticity-of-digital-artifacts-with-blockchain-de4474c8fba