Understanding 10 types of ‘fake news’

Since its popularisation, the term ‘fake news’ has often been used, incorrectly, to describe many different forms of troublesome content published on modern media networks. Generalising all deceptive content as ‘fake news’ is problematic: without properly defining what the content actually is, it is difficult to respond appropriately. What matters instead is the ability to correctly identify and understand the nature of the content. In a world where content is created and shared instantaneously, recognising and prioritising the posts that pose a risk to reputation and audience awareness is the key challenge for communications and business continuity professionals alike.

There are ten types of ‘fake news’, only one of which is actually called fake news. Each of the ten forms of deceptive or misleading content carries a different level of threat, influence and intent. The focus needs to be on identifying the types of content that are malicious in nature and present a high risk of causing panic and confusion.

1. FAKE NEWS

Average level of risk = high

The deliberate publishing of untrue information disguised as news. Created purely to misinform audiences, actual fake news is completely false; there was never any intention to report genuine facts.

Source: endingthefed

2. MANIPULATION

Average level of risk = high

The deliberate altering of content to change its meaning. This includes reporting quotes out of context and cropping an image so that it no longer reflects the true story.

Source: Twitter

3. DEEPFAKE

Average level of risk = high

The use of digital technology to replicate the live facial movements and voice of another person in a video. The actor in the video becomes a ‘human green screen’, enabling the final video to be a realistic impersonation of a high-profile person. Viral cases of deepfake videos include falsified clips of Barack Obama and Mark Zuckerberg.

Source: YouTube

4. SOCKPUPPETS

Average level of risk = high

The creation of multiple social media personas with opposing views. The intention is to cause a deliberate clash between two (or more) parties, for example by creating two fake events, each pitched at supporters of opposing political parties, to be held at the same place on the same day and time.

Source: texastribune

5. PHISHING

Average level of risk = high

Schemes aimed at unlawfully obtaining personal information from online users. Malicious web links, dressed up to resemble a genuine message from a real person or company, are sent to users via text, email or other online messaging platforms. Any personal data entered after clicking through to the malicious link is then harvested by cyber criminals.

Source: Lucinda Fraser, Twitter
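
Much of the deception in phishing relies on lookalike web addresses that are almost, but not quite, identical to a legitimate domain. The following Python sketch illustrates one simple heuristic for spotting such links; the trusted domain list, the similarity threshold and the example URLs are illustrative assumptions, not part of any real security tool.

```python
# Minimal sketch of one heuristic for spotting lookalike phishing links.
# The TRUSTED_DOMAINS list, threshold and example URLs are illustrative
# assumptions only, not drawn from any real security product.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = ["paypal.com", "hmrc.gov.uk", "barclays.co.uk"]  # hypothetical

def looks_like_phishing(url: str, threshold: float = 0.8) -> bool:
    """Flag a URL whose domain imitates, but does not match, a trusted domain."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted or domain.endswith("." + trusted):
            return False  # exact match or a legitimate subdomain
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True   # near-match: likely a lookalike domain
    return False

print(looks_like_phishing("https://paypa1.com/verify-account"))  # True
print(looks_like_phishing("https://www.paypal.com/signin"))      # False
```

Real phishing filters use far richer signals, but the principle is the same: a link that imitates a familiar name should be treated with suspicion before any personal data is entered.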

6. MISINFORMATION

Average level of risk = medium

Typically, a combination of accurate and incorrect content. Common examples of misinformation include misleading headlines, and using ill-informed and unverified sources to support a story.

Source: Brighton & Hove City Council

7. RUMOUR

Average level of risk = medium

Information shared without verification. Often occurs shortly after an incident (e.g. natural disaster or terrorist attack) when little information is known for certain.

Source: Twitter

8. CLICK BAIT

Average level of risk = medium

Sensationalised headlines designed to attract clicks and readership. Each time the article is read, the author or owner of the advertisement receives a payment (also referred to as ‘pay-per-click’ advertising).

Source: Evening Standard
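
The economics behind click bait are straightforward pay-per-click arithmetic. The short sketch below uses invented figures purely to illustrate how sensationalised headlines turn clicks into revenue; the click volume and per-click payment are assumptions, not published rates.

```python
# Back-of-the-envelope pay-per-click arithmetic. Both figures are invented
# purely to illustrate the incentive behind sensationalised headlines.
clicks = 250_000            # assumed number of reads driven by the headline
payment_per_click = 0.002   # assumed payment (in pounds) per read

revenue = clicks * payment_per_click
print(f"Estimated advertising revenue: £{revenue:,.2f}")  # £500.00
```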

9. SATIRE & PARODY

Average level of risk = low

Content created for comic and entertainment purposes. Examples include online profiles that mimic an official account or person, and articles featuring dark humour.

Source: Betoota Advocate

10. BOT

Average level of risk = low

Online profiles that are neither operated by humans nor represent real users. The profiles are generated and run by automated systems (sometimes referred to as click farms or bot warehouses), often contributing to online discussions en masse. Social networks regularly remove profiles that do not appear to represent real users; however, because profile creation is automated, the removal rate often cannot keep pace with the new bot profiles constantly being added to the network.

Source: Medium
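
The point about removal rates can be made concrete with a toy calculation. The sketch below uses an invented daily creation rate and an invented removal rate; even when a large share of bots is removed every day, automated creation sustains a standing population.

```python
# Toy model of bot creation versus removal. Both rates are invented for
# illustration; the point is that automated creation sustains a standing
# population of bots even when removal is aggressive.
new_bots_per_day = 50_000   # assumed automated account creation rate
daily_removal_rate = 0.60   # assumed share of active bots removed each day

active_bots = 0
for day in range(1, 8):
    active_bots += new_bots_per_day
    active_bots -= int(active_bots * daily_removal_rate)
    print(f"Day {day}: ~{active_bots:,} bot profiles still active")
```

Under these assumed figures the active population settles at roughly 33,000 profiles, despite 60 per cent being removed every day.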

Taking the time to properly understand the level of risk associated with troublesome content is an essential investment in resilient corporate communications. Developing a working knowledge of all ten types will save an organisation time, money and stress in the heat of a response. Risks, priorities and threats will differ from organisation to organisation, but correctly identifying and understanding the nature of the content will enable an effective response.


