Monday, December 23, 2024

Today, we’re sharing our second quarterly adversarial threat report that provides insight into the risks we see worldwide and across multiple policy violations. The report marks nearly five years since we began publicly sharing our threat research and analysis into covert influence operations that we tackle under the Coordinated Inauthentic Behavior (CIB) policy. Since 2017, we’ve expanded the areas that our threat reporting covers to include cyber espionage, mass reporting, inauthentic amplification, brigading and other malicious behaviors.

Here are the key insights in today’s Adversarial Threat Report:

Cyber espionage: Our investigations and malware analysis into advanced persistent threat (APT) groups show a notable trend in which APTs choose to rely on openly available malicious tools, including open-source malware, rather than invest in developing or buying sophisticated offensive capabilities. While some opt for more advanced malware that often incorporates exploits, we’ve seen a growing number of operations using basic, low-cost tools that require less technical expertise to deploy, yet yield results for the attackers nonetheless. This lowers the barrier to entry and democratizes access to hacking and surveillance capabilities. It also allows these groups to hide in the “noise” and gain plausible deniability when scrutinized by security researchers.

Emerging harms: Over the past year and a half, in response to organized groups relying on authentic accounts to break our rules or evade our detection, we’ve developed multiple policy levers to help us take action against entire networks — whether these are centralized adversarial operations or more decentralized groups — as long as they work together to systematically violate our policies. Since we began deploying these levers, we’ve enforced against networks with widely varying aims and behaviors, including groups coordinating harassment against women, decentralized movements working together to call for violence against medical professionals and government officials, an anti-immigrant group inciting hate and harassment, and a cluster of activity focused primarily on coordinating the spread of misinformation. Our report highlights our findings and takedowns in India, Greece, South Africa and Indonesia.

A deep dive into the Russia-based troll farm: We’re also sharing our threat research into a troll farm in St. Petersburg, Russia, which unsuccessfully attempted to create a perception of grassroots online support for Russia’s invasion of Ukraine by using fake accounts to post pro-Russia comments on content posted by influencers and media on Instagram, Facebook, TikTok, Twitter, YouTube, LinkedIn, VKontakte and Odnoklassniki. Our investigation linked this activity to the self-proclaimed entity CyberFront Z and individuals associated with past activity by the Internet Research Agency (IRA). While this activity was portrayed as a popular “patriotic movement” by some media entities in Russia, including those previously linked to the IRA, the available evidence suggests that they haven’t succeeded in rallying substantial authentic support.

Summary of Our Threat Disruptions

  • We took action against two cyber espionage operations in South Asia. One was linked to a group of hackers known in the security industry as Bitter APT, and the other, APT36, to state-linked actors in Pakistan.
  • As part of disrupting new and emerging threats, we removed a brigading network in India, a mass reporting network in Indonesia and coordinated violating networks in Greece, India and South Africa.
  • Under our Inauthentic Behavior policy against artificially inflating distribution, we took down tens of thousands of accounts, Pages and Groups around the world. Our manual investigations around the Philippines election allowed us to build automated enforcement systems to defend against this sort of activity globally and at scale.
  • We also removed three networks engaged in CIB operations: one linked to a PR firm in Israel and two separate troll farms, one in Malaysia targeting domestic audiences and the other in Russia targeting global discourse about the war in Ukraine. We’ve included in-depth threat research and analysis of the Russian network at the end of our report.

We shared our latest findings with our peers at tech companies, security researchers, governments and law enforcement. We’re also alerting the people who we believe were targeted by these campaigns, when possible.

See the full Adversarial Threat Report for more information.
