Artwork by Joaquin Labio

Dangers of Disinformation: Against False Information and Historical Distortion (Part 1)

First of four parts.

Introduction to Information Disorder

The Philippines is currently recognized as the world’s social media capital, where eighty million Filipinos use social media for four hours per day on average (Statista, 2021). As of 2021, Facebook was the dominant platform in the country (96.2% of the country’s social media market share), followed by Facebook Messenger (94.4%), Instagram (75.7%), TikTok (67.9%), and Twitter (59.2%) (Statista, 2022).

The rise of social media has brought about a growing wave of disinformation. The Philippines has even been referred to as a “petri dish for disinformation,” with groups testing out voter manipulation tactics on Philippine audiences before employing these same techniques in the West. Similarly, an Oxford University report found that politicians use social media manipulation, troll farms, and disinformation as part of political campaigns. While Facebook and YouTube have begun to regulate and remove content that peddles disinformation, experts at the Institute for Strategic Dialogue argue that the platforms’ actions have not been sufficient, as methods of spreading misinformation continuously evolve.

According to Rappler (2021), false information spreads undetected and unchecked. In the Philippine online landscape, Facebook, Twitter, TikTok, and YouTube are still open for abuse, amplifying false and fabricated narratives. Loopholes in Facebook policy expose Messenger users to false information. Similarly, YouTube’s ambiguous policies allow lies to thrive. Stars and influencers are paid to promote Marcos-Duterte propaganda, while vloggers legitimize biased reporting.

False Information vs “Fake News”

Why do we prefer the term “false information” over the oft-used “fake news”? According to Claire Wardle (2017), “false information” refers to disinformation across a range of topics, such as health, the environment, and economics, and across a variety of platforms. In contrast, the term “fake news” is at once ambiguous and narrow: vague in meaning, yet typically limited to political news stories and statements. This framing is often weaponized by authorities to attack dissenting and critical parties.

Misinformation, Malinformation, and Disinformation

Figure 1. The Three Types of Information Disorder.

Misinformation: False information that is created without the intent of causing harm. Examples of misinformation include unintentional mistakes such as incorrect photo captions, dates, statistics, and translations. Other forms of misinformation include false connections, misleading content, and satire taken seriously.

Example: A misleading post, released in December 2021, claims the House of Representatives approved a bill on its third and final reading to make the Reserve Officers’ Training Corps (ROTC) mandatory for Grades 11 and 12. However, upon further investigation, the House of Representatives approved said bill in May 2019, not December 2021. Facebook pages simply reused old posts from the Philippine Star and Manila Bulletin to give the impression that mandatory ROTC became law in December 2021.

Figure 1.1 MISLEADING: House approves mandatory ROTC bill for Grades 11 and 12 in December 2021.

Malinformation: Verifiable information that is used to cause harm to a person, organization, or country. Examples include private information intentionally published for personal or corporate gain rather than the public good, and the deliberate alteration of the context, date, or time of genuine content.

Disinformation: False information that is purposefully created to harm a person, social group, organization, or country. Examples include manipulated or fabricated audio-visual content, impostor content, rumors, and conspiracy theories.

Example: Red-tagging is a form of disinformation, common both on the ground and on social media. The Commission on Human Rights (CHR), citing the International Peace Observers Network (IPON), defines red-tagging as “an act of state actors, particularly law enforcement agencies, to publicly brand individuals, groups, or institutions as… affiliated to communist or leftist ‘terrorists’.” By falsely and maliciously tying government critics, activists, and organizations to “terrorist” organizations, the government stifles dissent, generates a “chilling effect,” and encourages assassinations and retaliation.

Figure 1.2 RED-TAGGING: State and/or non-state actors maliciously tag activists as affiliated with the CPP-NPA-NDF.

Types of Misinformation and Disinformation

Figure 2. Categories of Information Disorder.

Satire or parody: No intention to cause harm but has the potential to fool. (Example)

False connection: When headlines, visuals, or captions do not match or support their accompanying content. (Example)

Misleading content: The misleading use of information to frame an issue or an individual. (Example)

False context: When genuine content is shared with false contextual information. (Example)

Imposter content: When genuine sources are impersonated. (Example)

Manipulated content: When genuine information or imagery is manipulated to deceive. (Example)

Fabricated content: New content is 100% false, designed to deceive and do harm. (Example)

Elements of Information Disorder

Figure 3. The Three Elements of Information Disorder.

Agent: Who were the “agents” that created, produced, and distributed the message, and what was their motivation? Agents are involved in all three phases of the information chain — creation, (re)production, and distribution — and have various motivations (Claire Wardle and Hossein Derakhshan, 2017).

  1. What type of actor are they?
    Agents can be official, such as intelligence services, political parties, and news organizations. They can also be unofficial, like groups of citizens that have become evangelized about an issue.
  2. How organized are they?
    Agents can work individually, in tightly-organized organizations (e.g., PR firms or lobbying groups), or in impromptu groups organized around common interests.
  3. What are their motivations?
    Financial: To profit from information disorder through advertising; Political: To discredit a political candidate in order to influence public opinion; Social: To connect with a certain group online or off; and Psychological: To seek prestige or reinforcement.
  4. Which audiences do they intend to reach?
    Audiences can vary from an organization’s internal mailing lists or consumers, to social groups based on socioeconomic characteristics, to an entire society.
  5. Is the agent using automated technology?
    The ability to automate the creation and dissemination of messages online has become much easier and, crucially, cheaper. For example, a bot account, according to the Oxford Internet Institute, posts more than 50 times a day, on average.
  6. Do they intend to mislead?
    The agent may or may not intend to deliberately mislead the target audience.
  7. Do they intend to harm?
    The agent may or may not deliberately intend to cause harm.

Message: Messages can be communicated by agents in person (via gossip, speeches, etc.), in text (newspaper articles or pamphlets), or in audio/visual material (images, videos, motion graphics, edited audio clips, memes, etc.). While much of the current discussion about “fake news” has focused on fabricated text articles, disinformation often appears in visual formats. The multimedia nature of disinformation poses additional challenges for fact-checking, as the technologies for automated text, image, and video analysis are significantly different (Claire Wardle and Hossein Derakhshan, 2017).

  1. How durable is the message?
    Some messages are designed to stay relevant and impactful for the long term (throughout an entire war or in perpetuity). Others are designed for the short term (during an election) or for a specific moment, as in the case of an individual message during a breaking news event.
  2. How accurate is the message?
    As discussed earlier, messages can take the form of malinformation: truthful information used to harm, either through leaks or hate speech. Messages may also contain inaccurate information, ranging from false connections (a clickbait headline mismatched with its article’s content) to 100% fabricated information.
  3. Is the message legal?
    The message might be illegal, as in the cases of recognized hate speech, intellectual property violations, privacy infringements, or harassment.
  4. Is the message ‘imposter content’, i.e. posing as an official source?
    The message may use official branding (e.g., logos) unofficially, or it may steal the name or image of an individual (e.g., a well-known journalist) in order to appear credible.
  5. What is the message’s intended target?
    The agent’s audience (the group they want to influence or manipulate) is different from the agent’s target (those who are being discredited). The target can be an individual (a candidate or a political or business leader), an organization (a private firm or a government agency), a social group (a race, ethnicity, the elite, etc.), or an entire society.

Interpreter: When the message was received by someone, how did they interpret the message? What action, if any, did they take? Audiences are very rarely passive recipients of the information. An ‘audience’ is made up of many individuals, each of whom interprets information according to his or her own socio-cultural status, political positions, and personal experiences (Claire Wardle and Hossein Derakhshan, 2017).

In an era of social media, where everyone is a potential publisher, the interpreter can become the next ‘agent,’ deciding how to share and frame the message for their own networks. Will they show support for the message through likes, comments, and shares? If they do share the message, have they done so with the same intent as the original agent or will they share it to show their disagreement?

Phases of Information Disorder

Figure 4. The Three Phases of Information Disorder.

Creation: When the message is created.

(Re)production: When the message is turned into a media product.

Distribution: When the product is distributed or made public.

It is important to consider the different phases of information disorder alongside its elements because the agent that creates the content is often fundamentally different from the agent that produces it. For example, the motivations of the mastermind who “creates” a state-sponsored disinformation campaign are very different from those of the low-paid “trolls” tasked with turning the campaign’s themes into specific posts. Once the message has been distributed, it can be reproduced and redistributed endlessly by many different agents, all with different motivations. For example, if several communities distribute a social media post, it can be picked up and reproduced by the mainstream media and further distributed to still other communities. Only by dissecting information disorder in this manner can we begin to understand these nuances. (Claire Wardle and Hossein Derakhshan, 2017).

The role of the mainstream media as agents in amplifying, intentionally or not, fabricated or misleading content is crucial to understanding information disorder. While fact-checking has always been fundamental to quality journalism, the techniques used by hoaxers and those attempting to disseminate disinformation have become increasingly sophisticated. With newsrooms relying on the social web for story ideas and content, forensic verification skills and the ability to identify networks of fabricated news websites and bots are more important than ever before (Claire Wardle and Hossein Derakhshan, 2017).

Meanwhile, fact-checkers, journalists, lawmakers, educators, civil society organizations, and concerned citizens have been urging social media platforms to accept responsibility for the misinformation that circulates on their platforms (Rappler, 2019). Massive amounts of propaganda and targeted disinformation, produced and amplified by a vast network of websites, Facebook pages and groups, YouTube channels, and social media influencers, appear to be part of a systematic campaign to burnish the image of the Marcoses and Dutertes and mislead the public, paving the way for their continued rise in Philippine politics.

Written by: Kent Benedict Balon

Institute for Nationalist Studies

The Institute advances ideas and information campaigns on social issues to foment a nationalist consciousness in the interest of the people’s welfare.