Meta’s quarterly Adversarial Threat Report paints a somewhat depressing picture of the once-dreaded global troll ecosystem: a series of “relatively low sophistication” operations failing to spam their way to relevance. But just because they’re bad at their job doesn’t mean we can let our vigilance lapse.
The report describes various forms of hacking and manipulation attempts across the internet, but it makes for rather dismal reading: handfuls of people in Greece, Pakistan, or Russia working 9-to-5 out of ramshackle offices, smothered by automated systems before they can do any serious damage.
The common theme of most threats is impersonation, in which malicious actors either pose as real people or fabricate original personas with the help of things like AI-assisted content generation. Using networks of these accounts, often posing as attractive young women, they contact people around the world and try to trick them into following links to malware or fake apps and services.
Needless to say, don’t trust any handsome stranger you meet online — or anywhere. But the tools they employ are often not state-of-the-art, noted Meta’s security writers:
This threat actor is a good example of a global trend where less experienced groups are choosing to rely on openly available malicious tools rather than investing in developing or purchasing sophisticated offensive features.
There were also some groups that ran farms of a few hundred to a few thousand accounts engaged in bulk reporting and brigading of content on Instagram, Facebook, and other social media. These groups are usually ideologically motivated and target ethnic groups, religious groups, and political opponents. Some Greek extremists went too far (as extremists tend to do — it’s right in the name) and ended up in a hoist-by-their-own-petard situation:
According to public reporting, people linked to this activity have been tied to the kidnapping of a school principal over his enforcement of COVID-19 controls. They took him to the police to report him for violating the constitution, which led instead to the arrest of the kidnappers.
A good reminder that online harassment often spills over into the real world. Being attacked by an angry internet mob is increasingly a threat to one’s security.
The longest chunk of the Meta report goes into detail about “Cyber Front Z,” a Russian troll farm first reported on by journalists in that country. It attempted to put together an astroturfing campaign around the Russian invasion of Ukraine, but as the report states, “This fraudulent operation was clumsy and largely ineffective.”
There were around a thousand accounts with around 50,000 followers, plus twice that many on a Telegram channel. The plan, basically, was to solicit real engagement from those followers — “let’s all shout down this activist” kind of stuff — and then pad it out with fake-account engagement so it looked like a genuine grassroots effort was taking place.
Fortunately, the activity was quickly spotted and shut down wherever possible. The operators didn’t seem to make much of an effort to avoid coming off as trolls, sometimes posting opposing points of view in English and Russian within minutes of each other. As with other farms, activity patterns suggested that those paid to post on the organization’s behalf were likely doing so only as a side hustle. (This also helps explain the sloppy tradecraft.)
All of these networks posted on a fixed schedule with a clear workday pattern, seven days a week, with a slow start in the morning and a surge towards the end of the day — possibly because operators were rushing to meet their posting quotas.
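That kind of time-of-day clustering is one of the simpler signals an automated system could pick up on. As a purely illustrative sketch — `workday_concentration` is a hypothetical function, not anything from Meta’s actual detection pipeline, which surely uses far richer signals — here is what flagging a quota-driven posting schedule might look like:

```python
from collections import Counter
from datetime import datetime

def workday_concentration(timestamps, start_hour=9, end_hour=18):
    """Return the fraction of posts falling inside a 9-to-5-style window.

    Hypothetical illustration only: organic users post around the clock,
    while quota-driven operators cluster activity inside shift hours.
    """
    hours = Counter(ts.hour for ts in timestamps)
    total = sum(hours.values())
    in_window = sum(n for h, n in hours.items() if start_hour <= h < end_hour)
    return in_window / total if total else 0.0

# A toy account whose posts all land during the workday, surging at 5 p.m.
posts = [datetime(2022, 8, 1, h) for h in (9, 10, 14, 16, 17, 17, 17)]
print(workday_concentration(posts))  # 1.0 — every post inside the shift
```

A score near 1.0 on its own proves nothing, of course — plenty of real people post during office hours — but combined with identical content across accounts and end-of-day surges, it becomes the kind of pattern the report describes.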
While this all sounds pretty harmless, even a bit pathetic, remember that these operations are the background noise of the security world, just as every city has its share of everyday cons and scams. That they can be easily detected and shut down is good, but more sophisticated groups are working on far more damaging things, like large-scale breaches and more successful manipulation of public perception. We see that often enough on the home front.