Fake News in 2020

From New Media Business Blog



Introduction

Fake News

The term "fake news" entered mainstream use during the 2016 U.S. presidential election [1]

Fake news is amplified, inaccurate, or false information that can cause real-world harm. During the 2016 U.S. presidential election, false information was circulated by individuals motivated to advance political causes or earn revenue from online traffic, and fake news became a major problem. It has become difficult for people to differentiate between speculation, opinion, and fact on platforms used every day, such as Facebook[2]. Many fake news websites use domain names similar to those of reputable news sources to attract more readers and gain traction. Other sites, such as The Onion, deliberately produce satire; readers must cross-reference facts and think critically about content, no matter how believable it may seem.

Impact

Share of adults who use social media as a source of news in selected countries worldwide as of February 2020 [3].

Based on Statista data, less than 35% of adults in Europe considered news on social networks trustworthy, yet more than 50% of adults in European countries get their news from social media[4]. Even people who are aware of the risk of fake news on social media may still believe the content if they do not cross-check the facts. People believe fake news is most prevalent in online news websites and platforms, followed by newspapers and magazines, and then TV and radio[5]. During the COVID-19 pandemic, many adults worldwide have gotten their coronavirus updates from major news organizations and social media. Constant contact with online media sources, which carry a higher risk of spreading false information, makes it difficult to differentiate fact from fiction.

Share of adults worldwide who believe fake news is prevalent in selected media sources as of February 2019 [6]

Concerns

Fake news is a concern for society as a whole because of the difficulty of distinguishing it from real news: false information can be woven into otherwise accurate articles and has become more prevalent in online sources, misleading readers. Through the prevalence of false information, people's perception of reality can be manipulated and their attention distracted from real issues. For example, a fake tweet attributed to Trump can shift public focus from a real policy debate, such as healthcare, onto the fabrication itself. Fake news also spreads quickly on social media, and the spread is difficult to control once content of broad interest is shared across multiple channels and platforms. Although Facebook and Twitter are the best-known platforms where fake news spreads, false information also circulates on platforms such as Instagram and WhatsApp, fueling suspicion and conspiracy theories[7].

Cognitive Biases

There are four main cognitive biases associated with fake news. First, many people read only headlines and tags without reading the article. A person who does not read the full article cannot spot the red flags that reveal it is fake, such as inconsistency between the title and the content, or grammar and spelling errors. After reading a catchy title, people also tend to share it with friends and family to whom it seems relevant, increasing the number of people exposed to the false information. Second, information on social media is accepted more readily: as content spreads quickly on these platforms, people encounter it repeatedly and come to believe it is true without checking the facts. Third, fake news articles exploit common political mental shortcuts. For example, a reader with left-wing views is more likely to believe an article describing negative situations involving the right wing, and vice versa; people with strong political views are more easily manipulated by fake political news. Fourth, false information tends to stick around, because it takes a significant amount of time for reputable news sources to gather the facts necessary to prove that an article is false[8].

Generation

Human

Fake news can be generated by humans. People often create fake news to generate easy revenue at low cost, treating the publication of fake news as an independent business opportunity. An individual will register a URL similar to that of a reputable news company, copy content from other sites, and add a catchy title to attract more visitors. Companies purchasing ads on the page are the main revenue generator. These publishers are not concerned with the quality of the content; they focus on producing whatever earns the most clicks on their website[9].

Artificial Intelligence (AI)

Fake news can also be generated by artificial intelligence (AI). Natural Language Processing (NLP), a branch of AI dealing with the interaction between computers and humans using natural language, is the main driver of this type of fake news generation. NLP systems can read, decipher, understand, and make sense of natural language, giving them the ability to generate content by predicting what could come next based on an article title or some general information[10]. For example, Generative Pretrained Transformer 2 (GPT-2) and Grover can generate fake news. Many of these AI generators can also detect AI-generated fake news[11].

Spread

Humans

Humans are one of the main contributors to the spread of fake news. In the first case, people are not aware that they are interacting with false information: if a fake news article, image, or other content relates directly to them, their friends, or their family, they are likely to share it with those people. In the second case, people spread fake news maliciously. These individuals are usually the same people who produce fake news and/or want to influence political views. They create fake accounts on various social media platforms and join popular groups to post and share content, hoping to gain the most traction. They also follow popular figures and push their views or articles through comments on those figures' pictures and posts, increasing the number of people exposed to the content[12].

Bots

Bots are programs that perform tasks automatically and can be benign or malicious depending on the task. Bots typically create buzz around certain people, issues, or topics, mirroring content based on popularity and often using hashtags to identify trends. They post comments under popular posts that exploit particular political views, social perceptions, etc.[13] By inflating the popularity of false information on social media, they cause more people to come in contact with it. They can also run automated campaigns that are ultimately controlled by humans[14].

Examples

Russia’s Agenda

Russia has a history of cybersecurity hacking, fraud, and scams, and it has played a large role in how quickly fake news spread during the pandemic. More specifically, Russia engaged in stirring uncertainty within the EU as COVID-19 ramped up in Spain and Italy. Research from the Oxford Internet Institute shows that COVID-19 misinformation and disinformation from sources such as foreign governments increased[15]. Statistics show that 92% of misinformation from state-backed governments originates from Russia and China (CBC). Politically, Russia's agenda is to spread fear and misinformation to other countries and sow confusion about COVID-19 prevention, treatments, and effects. Example headlines claimed that handwashing does not help prevent the spread of COVID-19, or that smokers are unaffected by the virus. This has major health implications, as people turn to news outlets for reliable information.

China’s Response to Covid-19

With news coverage focused on the seafood market where COVID-19 is thought to have originated, China has played a large role in managing this image, using disinformation tactics to spin articles toward optimism, elevate the country's image, and sow confusion about the virus's origin. Articles and videos circulated of hospital staff dancing with patients to brighten the dire situation in the retrofitted, makeshift arenas the Chinese government had built. In addition, a video of a nurse dragging body bags containing deceased individuals was leaked onto the internet; the Chinese government declared the video false and detained those who spread it. Within China, state-owned media companies can censor and tailor content at their discretion. Chinese media networks accordingly portrayed very positive images and videos of nurses, showing happiness throughout the pandemic, despite leaked videos of former patients describing poor treatment in the makeshift hospitals. Lastly, the origin of the virus has been a major controversy: as the first few cases arose in the U.S., the Chinese government took the opportunity to spin the news, with its English-language media claiming that the "first case of COVID-19 originated in the US"[16]. China's agenda stems from its national sovereignty over how it manages public perception, ensuring that messaging remains positive. This is a major concern, as Chinese citizens may not be receiving accurate information through state-owned media.

Trump

COVID-19

Since the beginning of the pandemic, President Trump has bombarded Americans with false and misleading claims about COVID-19. Many of his claims have concerned the high rates of testing he has implemented across the country and the measures he is taking to boost the U.S. economy. His tweets about the pandemic have raised concerns among Americans. Trump's most recent tweets claimed the pandemic was under control in the U.S.; at the time, the country's daily cases had doubled to around 50,000, a higher daily case count than at the beginning of the pandemic[17]. His overly optimistic news has given American citizens the impression that returning to pre-pandemic life is simple and foreseeable in the near future.

Mail-in-voting

As the U.S. presidential election approaches, Trump is using his Twitter platform, with over 82 million followers, to voice his views on mail-in voting[18]. Trump tweeted that voting by mail is "substantially fraudulent" and would lead to a "rigged" election this coming November. Although Trump had voiced his thoughts on mail-in voting before, Twitter has now placed a small exclamation mark beneath two of his tweets about voting, urging his followers to "get the facts about mail-in ballots" via links to news stories[19]. The news stories point out that, though fraudulent mail-in voting does occur, it has taken place on a tiny scale compared to the number of votes mailed. Twitter explained that its primary intention in flagging the tweets was to avoid confusing voters and to advise them to research for themselves. Due to COVID-19, mail-in voting is expected to be used substantially across the country. Given Trump's statements, there is great concern that voters will be reluctant to use this method and may avoid voting altogether. This poses a concern not only for turnout this coming election, but also for the health risk to voters who instead vote in person.
Twitter adds warning labels to Trump tweets that made misleading claims about mail-in voting.[20]

Executive order

Just days after Twitter flagged his tweets, President Trump signed an executive order targeting Twitter and other social media companies. Trump said the move was to "defend free speech from one of the gravest dangers it has faced in American history"[21]. The order targets Section 230 of the Communications Decency Act, which grants internet companies immunity for content that they host but that is generated by their users. Without that immunity, Twitter, for example, could be held accountable for every tweet on its platform. Social media platforms have immense power to shape the interpretation of public events. If enforced, this executive order could affect not only social media companies but every website, app, or service where users gather online.

Facebook

Facebook has announced steps the organization is taking to fight the spread of fake news on its platform. It assured users that it intends to keep information on Facebook as accurate as possible by targeting three key areas: disrupting the economic incentives for traffickers of misinformation, investing in products to detect fake news, and helping users make more informed decisions when they encounter fake news[22]. However, Facebook CEO Mark Zuckerberg has publicly defended the company's decision not to take down political advertisements that contain false information. Zuckerberg believes that private companies should not censor politicians or news, and that people should see all news for themselves and form their own judgements[23]. Zuckerberg's strong opinions have affected his company significantly. Within weeks of his announcement, 200 Facebook employees wrote letters asking the CEO to reconsider, and 36 employees left the organization in protest of Zuckerberg's stance. Users have also been affected, with some hesitant to use a platform that risks exposing them to false information[24].

Covid-19 Scams

Scams have risen as people are more vulnerable than ever, and various types have been brought to light due to COVID-19. Three main scams include email phishing, mail-in letters, and phone scams impersonating government agencies to steal money and information. One example is financial fraud, where con artists impersonate financial institutions to ask for personal information, such as social insurance and credit card numbers[25]. Oftentimes, these scams are conducted through email phishing and text messages sent from local numbers. Alberta Health Services reported that con artists are calling victims, claiming they have tested positive for COVID-19, and asking for credit card and other personal information[26].

This digitally manipulated image was published on the Fox News homepage on June 12 with stories about Seattle. The image combines scenes from two different June 10 photographs from Capitol Hill. [27].

Black Lives Matter

The rise of the Black Lives Matter (BLM) movement has instigated waves of false information: fake tweets, photos taken out of context, and doctored images that create confusion and terror. A video montage from the official White House account accused an anti-fascist group of inciting violence by showing pictures of bricks and weapons; the montage was debunked when the brick piles were found to be common at construction sites across the city. Secondly, a video circulated online appearing to show a policeman smashing his own police car in order to pin the blame on protestors and depict them as looters. However, the same police car had already been damaged by a group of protestors, indicating that the footage was taken out of context[28]. Lastly, the use of doctored images, in which two images are combined, has been on the rise; Photoshop and other editing tools have made this more seamless than ever. Even established news agencies are prone to being misled by doctored images. A Fox News article used an edited picture of an armed guard at a peaceful protest that was created from two different images. This not only demonstrates how false news can be amplified, but also shows how much harder it becomes to separate real from fake when it comes from large news agencies like Fox News[29].

Prevention and Detection Methods

People can verify information in various ways. This chart shows examples of steps users can take to verify the information they read. [30].

Prevention Methods

To prevent the spread of fake news, both human intervention and algorithms can be used. For human intervention, people can verify information through the International Fact-Checking Network (IFCN) to flag fake news[31]. In addition, readers should take their own measures when reading articles: consider the source, read beyond the title, check the author's credibility and the article's date, look for supporting sources, ask whether it's a joke, check their own biases, and consult experts[32]. On the algorithmic side, browser extensions are being developed to help identify fake news. One case study comes from Princeton University, where four students created a fake-news-detecting browser extension for a hackathon. The extension checks items in a user's news feed for authenticity; searches with high confidence are summarized and shown to the user. For any link, the extension considers the website's reputation, comparing it against a database of malware and phishing websites. For pictures on Twitter, the extension converts the image to text and checks the tweets of the usernames mentioned[33].
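The source-reputation step described above can be sketched as a simple lookalike-domain check. This is an illustrative toy, not the Princeton extension's actual code: the domain list and similarity threshold below are invented for the example, and a real tool would query maintained databases such as IFCN signatories or phishing blocklists.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allow-list for illustration only; a real extension would
# consult maintained reputation and phishing databases.
REPUTABLE_DOMAINS = {"bbc.com", "nytimes.com", "cbc.ca"}

def lookalike_score(domain):
    """Return the reputable domain this one most resembles, with a 0..1 similarity."""
    best = max(REPUTABLE_DOMAINS,
               key=lambda d: SequenceMatcher(None, domain, d).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def flag_url(url, threshold=0.75):
    """Flag URLs whose domain imitates, but does not exactly match, a reputable source."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in REPUTABLE_DOMAINS:
        return False                  # exact match: trusted source
    _, score = lookalike_score(domain)
    return score >= threshold         # close-but-not-equal: suspicious imitation
```

For instance, `flag_url("https://www.nytlmes.com/article")` is flagged because the domain is nearly, but not exactly, `nytimes.com`, while the genuine `nytimes.com` passes.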

Detection Methods

Facebook’s Detection Method (Click-Gap)

The Click-Gap algorithm compiles website links that are popular on Facebook, then reverse-searches those links on the broader web and signals when the proportions of popularity are out of line. It limits the reach of offending links and reduces or downgrades the reach of Facebook groups that share them[34].
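The disproportion Click-Gap looks for can be illustrated with a toy heuristic. Facebook's actual signal and thresholds are not public; the ratio and cutoff below are invented purely for the sketch.

```python
def click_gap(facebook_clicks, web_inbound_links, max_ratio=50.0):
    """Flag a link whose Facebook popularity far outstrips its presence on
    the broader web (illustrative heuristic; the real signal is not public)."""
    # Add 1 so links with zero inbound web links don't divide by zero.
    ratio = facebook_clicks / (web_inbound_links + 1)
    return ratio > max_ratio

# A link clicked 100,000 times on Facebook but nearly invisible elsewhere
# on the web gets flagged; one with matching off-platform presence does not.
```

The intuition is that legitimate news accrues links across the open web, while engagement-bait often exists only inside the platform that amplifies it.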

Giant Language model Test Room (GLTR)

GLTR identifies machine-generated text through a combination of statistical analysis and visualizations made from a given piece of text. When a text is analyzed, GLTR determines whether each word was likely to come next or is unexpected. GLTR has access to GPT-2, which is discussed further in the next section. The model highlights words in green if they are among the 10 most likely next words, yellow if among the top 100, and red if among the top 1,000; unexpected words are highlighted purple. If a text contains many red and purple highlights, GLTR identifies it as human-generated, because of the unexpected word choices. If it contains mostly green and yellow highlights, GLTR identifies it as AI-generated, since each next word was accurately predicted. This kind of model could also open doors for people to generate fake reviews and content to influence public perception around a specific topic. Its limitations are that it operates at a limited scale and won't automatically detect large-scale abuse[35].
GLTR identifies machine-generated text through a combination of statistical analysis. [36]
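The colour-bucketing scheme described above is simple to sketch. The word ranks would come from a language model such as GPT-2 (not reproduced here), and the 80% threshold for calling text machine-generated is an invented illustration: GLTR itself presents the colour map and leaves the judgement to the human reader.

```python
def gltr_bucket(rank):
    """Map a word's rank under the language model to GLTR's highlight colour."""
    if rank <= 10:
        return "green"    # among the 10 most likely next words
    if rank <= 100:
        return "yellow"   # among the top 100
    if rank <= 1000:
        return "red"      # among the top 1,000
    return "purple"       # unexpected word

def likely_machine_generated(ranks, threshold=0.8):
    """Text where most words were highly predictable (green/yellow) looks
    machine-generated; frequent red/purple surprises suggest a human author."""
    predictable = sum(1 for r in ranks if r <= 100)
    return predictable / len(ranks) >= threshold
```

A text whose words are almost all ranked in the top 100 would be classified as AI-generated under this toy rule, mirroring the "mostly green and yellow" case above.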

Generative Pretrained Transformer 2 (GPT-2)

GPT-2 is trained to predict the next word based on the previous words, with parameters trained on a dataset of 8 million web pages. The model can adapt to the style and content of a prompt, generating realistic continuations on the topic. When a piece of text is fed into the GPT-2 model, it can estimate what percentage of the text is human- or AI-generated. The model can also generate fake news that replicates human-quality writing, which has serious social implications if used maliciously: people could generate fake news articles, impersonate others online, automate the production of fake content on social media, or automate the production of spam and phishing content[37].
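The objective GPT-2 trains on, predicting the next word from the words before it, can be illustrated with a toy bigram model. This is a deliberately tiny stand-in: GPT-2 itself is a transformer whose parameters are learned from 8 million web pages, not word counts over a short string.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which: a minimal 'predict the next word' model."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Return the continuation most often seen after `word` in training."""
    return model[word.lower()].most_common(1)[0][0]

# Invented toy corpus for the example.
model = train_bigrams("fake news spreads fast and fake news misleads readers")
```

Here `predict_next(model, "fake")` returns `"news"`, the most frequent continuation seen in training; GPT-2 does the same kind of prediction, but over a learned probability distribution conditioned on the entire preceding context.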

Grover

The architecture of Grover is very similar to GPT-2's, and it has been trained on real datasets. The Grover AI model detects fake news by first generating false information itself: since the model is good at producing false information, it is also good at detecting it. Grover classifies a given piece of text as fake or real with a 92% success rate, using parameters that include the domain, date, authors, headline, and body. Grover is one of the strongest detectors of AI-generated fake news[38].

Fact Extraction and Verification (FEVER)

FEVER is the world's largest fact-checking dataset, and models trained on it identify whether a statement is true or false. Sentences containing negation phrasing such as "did not" are usually considered false statements, revealing that models trained on FEVER pick up biases in the data. One such model was found to perform poorly on an unbiased evaluation, with accuracy dropping from 86% to 58%. FEVER works on both AI- and human-generated fake news[39].
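The give-away bias described above can be illustrated with a deliberately naive classifier that keys on surface cues rather than evidence. The cue list is invented for this sketch; the point is that a model can score well on a biased dataset while being factually wrong on unbiased claims.

```python
# Hypothetical negation cues standing in for the dataset artifacts the
# MIT study found; a real fact-checker must consult evidence, not wording.
NEGATION_CUES = ("did not", "does not", "never")

def biased_verdict(claim):
    """Mimic the FEVER give-away bias: label any claim with negation
    phrasing REFUTES, regardless of whether it is actually true."""
    text = claim.lower()
    if any(cue in text for cue in NEGATION_CUES):
        return "REFUTES"
    return "SUPPORTS"
```

Note that `biased_verdict("Einstein did not win a Nobel Prize")` returns `"REFUTES"` only because of the phrase "did not", which happens to be right here, but the same rule would wrongly refute any true negative claim: exactly the failure that surfaced when FEVER models were tested on unbiased data.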

Future of Fake News

Facial reenactment can map pixels from faces in videos in order to transfer expressions. [40].

As of June 2020, Microsoft has announced that it will use AI to replace around 30 journalists, no longer employing people to research, edit, and choose news articles for one of its pages[41]. Since many fake news detection methods flag AI-generated news on the assumption that it is likely fake, this signal will become less effective as companies shift from human to AI journalists.

Facial reenactment, a technology first demonstrated in 2016, allows an actor's expressions to be transferred in real time onto video of a different person. This makes it possible to spread fake news through footage of a famous celebrity or politician appearing to say things they never said. Although the final product still has much room for improvement, current facial reenactment results can already fool many unknowing viewers[42].

Another technology with the power to spread fake news is AI-based vocal synthesis. A start-up launched in 2017 has made it its goal to create the most realistic artificial sounds, including the human voice. Using only a few minutes of audio from a subject, its software leverages deep learning to mimic that voice. In the wrong hands, this technology could cause large-scale social unrest, especially in conjunction with facial reenactment[43].

Authors

Kevin Chung, Prabhjot Gadri, Nathan Lau, Dionne Rodrigues
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada

References

  1. https://www.cbsnews.com/news/donald-trump-election-facebook-fake-news-creator-paul-horner-claims-responsibility/
  2. https://www.sciencedirect.com/science/article/pii/S1877050917323086
  3. https://www-statista-com.proxy.lib.sfu.ca/statistics/1112026/fake-news-prevalence-attitudes-worldwide/
  4. https://www-statista-com.proxy.lib.sfu.ca/statistics/1112026/fake-news-prevalence-attitudes-worldwide/
  5. https://www-statista-com.proxy.lib.sfu.ca/statistics/422735/europe-trust-in-the-media-by-type/
  6. https://www-statista-com.proxy.lib.sfu.ca/statistics/1112026/fake-news-prevalence-attitudes-worldwide/
  7. https://www.sciencedirect.com/science/article/pii/S1877050917323086
  8. https://www.cits.ucsb.edu/fake-news/where
  9. https://www.cits.ucsb.edu/fake-news/where
  10. https://becominghuman.ai/a-simple-introduction-to-natural-language-processing-ea66a1747b32
  11. https://www.analyticsvidhya.com/blog/2019/12/detect-fight-neural-fake-news-nlp/
  12. https://www.sciencedirect.com/science/article/pii/S1877050917323086
  13. https://www.bbc.co.uk/bitesize/articles/zjhg47h
  14. https://www.cits.ucsb.edu/fake-news/where
  15. https://www.cbc.ca/news/politics/covid-coronavirus-russia-china-1.5583961
  16. https://www.nytimes.com/video/world/asia/100000007024807/china-coronavirus-propaganda.html?playlistId=video/coronavirus-news-update
  17. https://www.theatlantic.com/politics/archive/2020/07/trumps-lies-about-coronavirus/608647/
  18. https://www.newyorker.com/news/daily-comment/trumps-attack-on-voting-by-mail
  19. https://www.nytimes.com/2020/05/26/technology/twitter-trump-mail-in-ballots.html
  20. https://www.cnbc.com/2020/05/26/twitter-fact-checks-trump-slaps-warning-labels-on-his-tweets-about-mail-in-ballots.html
  21. https://www.cnn.com/2020/05/28/politics/read-social-media-executive-order/index.html
  22. https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news
  23. https://www.cbsnews.com/news/facebook-ceo-mark-zuckerberg-political-ads-people-should-judge-for-themselves-the-character-of-politicians/
  24. https://www.washingtonpost.com/technology/2020/06/05/facebook-zuckerberg-trump/
  25. https://www.canada.ca/en/financial-consumer-agency/corporate/covid-19/warning-potential-financial-fraud.html
  26. https://edmonton.ctvnews.ca/fake-covid-19-test-results-are-being-used-to-scam-albertans-1.4857399
  27. https://www.seattletimes.com/seattle-news/politics/fox-news-runs-digitally-altered-images-in-coverage-of-seattles-protests-capitol-hill-autonomous-zone
  28. https://www.bbc.com/news/52934672
  29. https://www.seattletimes.com/seattle-news/politics/fox-news-runs-digitally-altered-images-in-coverage-of-seattles-protests-capitol-hill-autonomous-zone/
  30. https://www.analyticsvidhya.com/blog/2019/12/detect-fight-neural-fake-news-nlp/?fbclid=IwAR24O2Wuw91AkXE_OowU38MqZw_cfY0NEpQmy9nz-U99Y1fl7_QIx58cVbI
  31. https://www.sciencedirect.com/science/article/pii/S1877050917323086
  32. https://www.analyticsvidhya.com/blog/2019/12/detect-fight-neural-fake-news-nlp/
  33. https://www.sciencedirect.com/science/article/pii/S1877050917323086
  34. https://www.wired.com/story/facebook-click-gap-news-feed-changes/
  35. https://gltr.io/
  36. https://twitter.com/datasciencenig/status/1158266253410279424
  37. https://openai.com/blog/better-language-models/
  38. https://thenewstack.io/grover-ai-detects-machine-generated-neural-fake-news-by-also-generating-it/
  39. http://news.mit.edu/2019/better-fact-checking-fake-news-1017
  40. https://nextshark.com/facial-re-enactment-software/
  41. https://www.forbes.com/sites/quickerbettertech/2020/06/07/microsoft-is-replacing-employees-with-aiand-other-small-business-tech-news/?fbclid=IwAR3VYvnPod3bBi0rjFVpoy_oh9oh-45BxeX2kdlnFhYA4Z-skP_XX4FZZsM#4e2d78e118c8
  42. http://www.graphics.stanford.edu/~niessner/papers/2016/1facetoface/thies2016face.pdf
  43. https://www.wired.com/brandlab/2018/10/lyrebird-uses-ai-find-artificial-voice/