Infodemic: The dissemination of misinformation online

Robert Hogge

This research considers how both automated systems and Americans themselves disseminate politically relevant false information online through social media. The project builds on a 2018 MIT investigation which found that misinformation spreads much faster and further than factual information.

The way false information is distributed online keeps shifting in response to the events that become prevalent subjects of misinformation. These shifts follow major political events, particularly the 2016 and 2020 US presidential elections and the subsequent 2021 Capitol insurrection. As mainstream social media sites have sought to mute or label misinformation, the users spreading it have migrated to less mainstream websites that are less prone to censorship. Amongst conservatives, for example, former president Donald Trump's removal from social media accelerated this process. Many users disseminate misinformation much faster and more frequently on these niche sites, while simultaneously sharing their most popular content on Facebook, Instagram and Twitter to reach more people.

Meanwhile, misinformation around COVID-19 is ubiquitous online. There is ample evidence that this infodemic is having sobering consequences, which makes this research more pertinent than ever.

There are various barriers to gathering data for this research. Data scientists within Facebook, which also owns Instagram and WhatsApp, are unable to answer many of these questions themselves because Facebook's management will not give them access to the necessary data and resources. Moreover, the company has disabled the accounts of certain external researchers it caught performing this kind of analysis, citing the need to 'protect people's privacy'. Since this project was completed, the media has increasingly focused on Facebook's concealed approach to internal and external research. Most recently, after significant pressure, Facebook did release files on the mental health impacts of Instagram. Twitter does release large data sets to academics, and most existing literature, including the MIT paper this project builds on, is based on them. However, this data is limited in certain areas and is unlikely to be representative of all social media sites.

I therefore instead followed the trails of individual stories, the users disseminating them and the platforms on which they were spread. I established how stories had been distributed by tracing them backwards to where they first publicly appeared on the internet, and then compared the characteristics of the accounts of misinformation spreaders across different platforms, as sketched below. This was combined with a review of the existing literature.
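The tracing step can be illustrated with a minimal sketch. Everything below (the record structure, field names and platform labels) is hypothetical and only shows the shape of the analysis: collect every post referencing a story, sort by timestamp to find its earliest public appearance, then summarise the spreader accounts per platform.

```python
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

# Hypothetical record of a post referencing a given story.
@dataclass
class StoryPost:
    platform: str       # e.g. "twitter", "facebook", "gab"
    account: str
    followers: int
    posted_at: datetime
    url: str

def trace_story(posts: list):
    """Find the earliest public appearance of a story in the collected
    trail and summarise the accounts spreading it on each platform."""
    ordered = sorted(posts, key=lambda p: p.posted_at)
    origin = ordered[0]  # earliest post found in the trail

    by_platform = defaultdict(list)
    for post in ordered:
        by_platform[post.platform].append(post)

    summary = {
        platform: {
            "accounts": len({p.account for p in ps}),
            "median_followers": sorted(p.followers for p in ps)[len(ps) // 2],
        }
        for platform, ps in by_platform.items()
    }
    return origin, summary
```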

I needed to ensure that my research was not influenced by selection bias. The aforementioned MIT investigation was based on stories from six fact-checking organisations. To guard against selection bias, the MIT researchers independently verified that their findings were nearly identical for a random selection of stories from outside these six organisations. Given that the stories from these six organisations were therefore representative of wider findings, my research randomly selected a sample of them using an online generator, as illustrated below. Separately, I focused heavily on a handful of stories that were either just emerging or had recently gone viral; these stories had to be selected purposefully.
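As a rough illustration of the sampling step (the story identifiers and sample size below are hypothetical), a seeded random sample can be drawn in a few lines, equivalent to using an online generator:

```python
import random

# Hypothetical identifiers for stories flagged by the six fact-checking
# organisations used in the 2018 MIT study.
fact_checked_stories = [f"story_{i:04d}" for i in range(1, 501)]

random.seed(42)  # fixed seed so the sample can be reproduced
sample = random.sample(fact_checked_stories, k=20)  # 20 stories chosen uniformly at random
print(sample)
```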

I also needed to ensure that my conclusions about the human and the automated (robot) spread of misinformation did not contaminate each other. The two account types were first separated by my own detection of robot-like characteristics on any given account, a heuristic approach sketched below. I then verified this classification using pre-programmed algorithms that detect robot-like activity.
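The manual separation can be sketched as a simple heuristic score. Every feature and threshold below is an illustrative assumption, not the actual detection algorithms used, which were existing pre-built tools.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical features often associated with bot-like behaviour.
    tweets_per_day: float
    account_age_days: int
    followers: int
    following: int
    has_default_avatar: bool

def bot_likeness(acct: Account) -> float:
    """Return a crude 0-1 score from hand-picked heuristics.
    Thresholds are illustrative, not validated."""
    score = 0.0
    if acct.tweets_per_day > 50:          # unusually high posting rate
        score += 0.3
    if acct.account_age_days < 30:        # very new account
        score += 0.2
    if acct.followers < 10 and acct.following > 500:  # follows many, followed by few
        score += 0.3
    if acct.has_default_avatar:           # no customised profile
        score += 0.2
    return min(score, 1.0)

# Accounts scoring above a chosen cut-off would then be cross-checked
# against a pre-programmed bot-detection tool before being classed as bots.
suspect = Account(tweets_per_day=120, account_age_days=12,
                  followers=3, following=800, has_default_avatar=True)
print(bot_likeness(suspect))  # 1.0
```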

Online falsity ranges in severity. As an example of a minor inaccuracy, Alexandria Ocasio-Cortez tweeted that Bernie Sanders has never taken corporate lobbyist money; in fact, he has taken lobbyist money in small amounts. When a Member of Congress posts a falsity, unverified users responding to it are less likely to challenge the claim if they share the opinion the post implies, and the more overtly false the claim is, the more accurately respondents tend to challenge it. This is in contrast to outright conspiracy theories. The accounts that disseminate this sort of material are usually focused on two or three topics at one time; currently, in almost all cases, one topic is related to COVID-19. Other topics vary in extremity, including claims of election fraud, antisemitic conspiracy theories, and claims that various politicians and scientists have body doubles, are secretly transgender, or are lizards. Fact checkers find that posts featuring minor inaccuracies come mostly from more credible sources, whereas the conspiracy theories, particularly around COVID-19, originate mostly from far less credible places, yet are nonetheless widespread and accepted. Falsity predominantly originates from social media users themselves, disseminating content both on mainstream sites and on less credible niche sites that are censored to varying degrees.

A story is far less likely to go viral until a few news websites have published articles about it, or there is a video of what spreaders of true and false information are claiming or discussing. These give people the material they need to share the story on multimedia platforms. A few people may post about a story before this occurs, but they have no source to corroborate their claims. This is true even for users of far-right and neo-nazi websites and applications, who have been drawing on false stories from comparatively moderate websites including The Wall Street Journal and Rumble.

Both true and false information spread effectively when accompanied by a strong and persuasive message, a link to an article with a punchy headline, or a powerful and concise image or video. Falsity communicated in an angry tone is more likely to go viral. Misinformation is also far more likely to go viral once a public figure is perpetuating it, whether through their own account or through an article or video of them espousing a convincing message. When an influential figure spreads misinformation, it is also more convincing to social media users who do not regularly regurgitate falsity. This underlines that traditional media platforms, including newspapers, TV and radio, are still pivotal in the spread of misinformation.

Originally, this project intended to follow former President Trump's social media posts. He was then removed from social media due to, in Twitter's words, the 'risk of further incitement of violence'. This shifted the focus to how the narrative that got Trump thrown off Twitter, Facebook and other sites was continued without him. His message was carried by other social media personalities, as well as by attempts to repost screenshots of his short-lived blog website.

Robots, or 'bots', are computer-programmed accounts usually aimed at amplifying the statistics of a particular account or post to project inauthentic popularity; these accounts or posts are typically pushing a certain narrative. Bots target specific keywords, accounts or posts, and they target influential users in the early spread of viral stories. The majority of bots are now proliferating true and false COVID-19 information. If successful, the targeted keywords will 'trend' and Twitter will promote them on its platform, exposure it would otherwise charge companies for. Social media websites are improving at removing these bots. However, there is a large category of bots that operate in their own echo chamber, merely following each other and otherwise staying fairly dormant; it is often unclear what these bots are intended for until they do something more significant. Moreover, bots are becoming more sophisticated, putting more effort into mimicking human behaviour for added false credibility.

The 2018 MIT paper that this research builds on states that bots amplify both true and false information roughly equally. This benefits its analysis of the human spread of false news, as bots therefore do not significantly skew the findings. Often the primary objective behind the spread of information by states, organisations, citizens or bots is to push a particular narrative; in such cases, the degree to which they are conscious of, or even care, whether the information is true varies widely.

Social media companies are primarily advertising companies, and they can profit from pretending not to notice the misinformation spreaders and bots crawling over their platforms. Despite this, likely for accountability reasons, a few major sites have taken action. Misinformation reported by fact-checking websites is hardly ever taken down or tagged by Twitter, is always tagged on Instagram, and is either taken down or tagged by Facebook. Accounts spreading falsity tend to have far more followers, and their posts more interactions, on Facebook than on Twitter. This reflects the perception that Twitter is more of an elite platform for political and other hobbyists, whilst Facebook and increasingly Instagram are stronger mechanisms for influencing the broader public. My research also corroborated existing findings that social media companies fail to take down the majority of reported antisemitic accounts and posts. Twitter has acknowledged that one reported Tweet violates its rules but has not yet responded about the others, despite their having been reported around the same time. I have therefore asked Twitter why racist Tweets containing false information that violates their policies have not been removed, and asked Facebook why it is likewise not taking down the majority of reported antisemitic posts. Neither company has responded yet.

In terms of further research, much of the existing literature focuses on Twitter and uses large-scale quantitative analysis, because Twitter is the largest platform that provides academics with enough data for this type of research. It would therefore be beneficial to extend the more targeted analysis used here, following specific accounts and stories across all relevant platforms. Furthermore, interviews with those spreading false information online would give a deeper insight into how they choose to find and spread this content, and an opportunity to understand why they do so.

I thoroughly enjoyed this UROP project and would recommend that future students apply for one.