Weaponising Social Media

In this article we’re going to look at how conspiracy theories and opportunistic misinformation are misused and amplified to break down trust in authorities and experts in order to cause harm.

Often these misinformation attacks end up focusing on particular individuals, casting them as the figures behind shadowy conspiracies.

The psychology behind the misinformation was mostly covered last time, so here we’re going to look at the mechanics that enable these attacks. We’ll take a look at fake social media accounts, botnets, and the automated amplification used to exploit social media algorithms.

Botnets

Botnets consist of computers, or other devices (the Mirai botnet, one of the largest known for a time, ran on CCTV cameras), compromised and enrolled into a central control system. While one of the most common uses is generating traffic for Distributed Denial of Service (DDoS) attacks, combined with a bit of scripting they can be used to create and control fake social media profiles with a much lower risk of detection. Spreading activity across the many devices in a botnet masks the source IP addresses, and correlating sources is one of the easiest ways to detect the coordinated creation of fake profiles.

In its simplest form, this sort of generation scripts the manual steps a person would take to create a profile: launching a virtual browser, clicking buttons, and entering information. Fortunately, attackers often put less effort into these scripts than they should, which leads to some common, detectable weaknesses.
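Because the botnet hides the obvious source-IP correlation, defenders fall back on other signals, timing being one of the simplest. Below is a minimal sketch of burst detection over registration events; the data, field names, and thresholds are all hypothetical, and a real system would combine many more signals.

```python
from datetime import datetime, timedelta

# Hypothetical registration log: (username, source_ip, timestamp).
signups = [
    ("jsmith84293", "203.0.113.10", datetime(2021, 3, 1, 9, 0, 12)),
    ("kwalsh10294", "203.0.113.55", datetime(2021, 3, 1, 9, 0, 47)),
    ("tbrown99281", "198.51.100.23", datetime(2021, 3, 1, 9, 1, 3)),
    ("realperson", "192.0.2.200", datetime(2021, 3, 1, 14, 30, 0)),
]

WINDOW = timedelta(minutes=5)  # how close together signups must be
THRESHOLD = 3                  # burst size considered suspicious

def burst_clusters(events, window=WINDOW, threshold=THRESHOLD):
    """Flag groups of signups packed into a short time window.

    Botnet-driven creation spreads registrations across many source
    IPs, so rather than correlating addresses we correlate timing: a
    burst of accounts appearing within minutes of each other is a
    stronger signal than any single IP address.
    """
    if not events:
        return []
    events = sorted(events, key=lambda e: e[2])
    clusters, current = [], [events[0]]
    for event in events[1:]:
        if event[2] - current[-1][2] <= window:
            current.append(event)
        else:
            if len(current) >= threshold:
                clusters.append(current)
            current = [event]
    if len(current) >= threshold:
        clusters.append(current)
    return clusters

for cluster in burst_clusters(signups):
    print("Suspicious burst:", [username for username, _, _ in cluster])
```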

Fake Profiles

It’s the work of a few minutes to manually set up a fake social media profile, and while there have been limited improvements to platform filtering systems, none of them do much to prevent it. Verification via phone number or e-mail isn’t effective, as it’s trivial to create a burner number or e-mail address for an account. One of the most effective checks used to be a reverse image search on profile pictures, but with the emergence of services like https://thispersondoesnotexist.com/ that check is no longer reliable.

Of course, anything that can be carried out manually is much faster to automate. With some effort put into scripting, it’s possible to generate hundreds of fake profiles in minutes, using different source IP addresses, auto-generated e-mail addresses for verification, and some form of username generation. This is where one of those weaknesses shows itself: you will occasionally hear, particularly on Twitter, that usernames followed by strings of numbers tend to be bots. The mistake is to assume this means the accounts are run entirely through scripts and automation; they are generated by botnets and scripting, but often at least partly operated manually.
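As a rough illustration of that weakness, here is a toy heuristic for the “name plus string of numbers” pattern. The regular expression and sample usernames are invented for the example, and on its own this check would generate plenty of false positives; it is one weak signal among many, not a bot detector.

```python
import re

# The "name followed by a run of digits" pattern left behind by naive
# username-generation scripts (sample names here are invented).
TRAILING_DIGITS = re.compile(r"^[A-Za-z_.]+\d{4,}$")

def looks_generated(username: str) -> bool:
    """Heuristic only: a match does not mean the account is a bot.

    As noted above, many such accounts are created by automation but
    then operated, at least in part, by a human.
    """
    return bool(TRAILING_DIGITS.match(username))

for name in ["jsmith84293", "kwalsh10294", "ana_2", "realperson"]:
    print(f"{name}: {looks_generated(name)}")
```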

Amplification

Social media application programming interfaces (APIs) are used to monitor and reply to comments, as well as to broadcast messages. Because the troll farms (a term for the organisations, and often the literal buildings, where these campaigns are coordinated) control so many accounts, they also use them to rebroadcast posts that match whatever narrative they are trying to develop. This activity is combined with combing various sources for new disinformation to throw into the mix, sowing more confusion and furthering the agenda of distrust.
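That rebroadcast activity has a simple fingerprint: the same text appearing across many unrelated accounts in a short period. Below is a minimal sketch of that check, with invented post data; real campaigns vary their wording, so production systems use fuzzier matching than exact text comparison.

```python
from collections import defaultdict

# Hypothetical feed sample: (account, post_text).
posts = [
    ("jsmith84293", "The TRUTH they don't want you to see..."),
    ("kwalsh10294", "The truth they don't want you to see..."),
    ("tbrown99281", "the truth they don't want you to   see..."),
    ("realperson", "Lovely weather today."),
]

def coordinated_copies(posts, threshold=3):
    """Group posts by normalised text and flag messages pushed by many
    distinct accounts, the amplification pattern described above."""
    by_text = defaultdict(set)
    for account, text in posts:
        # Crude normalisation: lower-case and collapse whitespace.
        key = " ".join(text.lower().split())
        by_text[key].add(account)
    return {t: accs for t, accs in by_text.items() if len(accs) >= threshold}

for text, accounts in coordinated_copies(posts).items():
    print(f"{len(accounts)} accounts pushing: {text!r}")
```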

As you’d expect, one of the targets for this disinformation is denying the existence of troll farms, and discounting claims of their existence as conspiracy theories. The effect is easy to see: when there is a lot of disinformation around, throwing more into the mix just increases mistrust. If a coordinated campaign can convince people to be equally sceptical of all information, finding the accurate information becomes as much a matter of chance as anything else. The trick then becomes simply filling people’s feeds with as much disinformation as possible: if you can crowd out reliable information and overwhelm people’s ability to think critically through sources, the system breaks down.

Targeting Individuals

Where this gets dangerous for a high-profile individual is that those who are convinced by this disinformation are easy to recruit. With the human talent for pattern recognition, these scattered opportunistic pieces of disinformation are tied together into meta-conspiracies, which are then pinned on individuals. The people who are convinced by these meta-conspiracies are demonstrably dangerous. Last time I mentioned arson attacks on 5G towers, but there are plenty of documented incidents of these types of theories inspiring domestic terrorism, including bombings and shootings, in various countries.

Recent examples include theories targeting heads of state (a reinvention of David Icke’s conspiracy about shape-shifting lizards controlling the world), Bill Gates and George Soros (microchips in vaccines), and many others. Alongside the recent use of a fake social media event to coordinate an armed militia of 200 people in America, the weaponisation of social media is a threat which anyone providing protection services for a high-profile individual needs to take seriously.

Protection

Prevention of this sort of weaponisation relies on education and effective action by social media platforms. Education does not scale effectively and cannot be comprehensively deployed, and social media companies have not shown any indication that they can address the threat effectively. Recent actions such as Facebook’s removal of a large number of accounts run by a Romanian troll farm are a good sign, but barely make a dent in the problem.

A good threat intelligence programme is needed to protect against this kind of threat, making sure that any attempt to weaponise social media against an individual is detected as early as possible so precautions can be taken. Some effort can be made to counteract the claims, but denial is rarely effective against these movements. The only advantage of the rapid development and deployment of these manoeuvres is that attention spans are often short. So while the threat can develop from nothing unexpectedly, if precautions are taken it is also likely to subside quickly.
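Early detection can be as simple as watching mention volume for the protected individual and alarming on sudden spikes. The sketch below assumes a hypothetical daily count feed; the baseline window and multiplier are arbitrary, and a real programme would track narratives and sentiment as well as raw volume.

```python
from statistics import mean

# Hypothetical daily counts of posts mentioning the protected individual.
daily_mentions = [12, 9, 15, 11, 10, 14, 13, 96]

def mention_spike(counts, baseline_days=7, multiplier=3.0):
    """Alert when today's mention volume far exceeds the recent baseline.

    These campaigns can develop from nothing very quickly, so even a
    crude volume alarm buys time to put precautions in place before
    the narrative peaks.
    """
    baseline = mean(counts[-(baseline_days + 1):-1])
    today = counts[-1]
    return today > baseline * multiplier, baseline, today

alerted, baseline, today = mention_spike(daily_mentions)
if alerted:
    print(f"Spike: {today} mentions vs ~{baseline:.0f}/day baseline")
```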

Further Reading

If you are interested in these threats and would like to read more, there are a few different sources. The Internet Research Agency is the best-documented case, Active Measures by Thomas Rid is one of the better books on the subject, and more and more threat intelligence providers are sharing research in this area.

By James Bore

James Bore is a cyber security consultant, speaker, and author with over a decade of experience in the domain. He has worked to secure national mobile networks, financial institutions, startups, and one of the largest attractions companies in the world, among others. He now provides cyber security advice, consultancy, and training as Director of Bores Consultancy Ltd. To get in touch, contact him at [email protected], and check the Bores website at https://bores.com for details of workshops and services available.
