U.S. Seizes Domains Used by AI-Powered Russian Bot Farm for Disinformation

The U.S. Department of Justice (DoJ) announced the seizure of two internet domains and the search of nearly 1,000 social media accounts that Russian threat actors allegedly used to covertly spread pro-Kremlin disinformation both domestically and internationally.

“The social media bot farm utilized AI to create fictitious social media profiles—often posing as U.S. citizens—which were then used to promote messages supporting Russian government objectives,” the DoJ stated.

This bot network, consisting of 968 accounts on X, is believed to be part of an elaborate, Kremlin-sponsored scheme devised by an employee of the Russian state-owned media outlet RT (formerly Russia Today) and aided by an officer of Russia’s Federal Security Service (FSB), who established and led an unnamed private intelligence organization.

The development of the bot farm began in April 2022, with the individuals acquiring online infrastructure while concealing their identities and locations. According to the DoJ, the organization aimed to advance Russian interests by spreading disinformation through fictitious online personas representing various nationalities.

These fake social media accounts were registered using private email servers that relied on two domains—mlrtr[.]com and otanmail[.]com—purchased from the domain registrar Namecheap. X has since suspended the bot accounts for violating its terms of service.

The disinformation campaign targeted the U.S., Poland, Germany, the Netherlands, Spain, Ukraine, and Israel, using an AI-powered software package called Meliorator, which facilitated the large-scale creation and operation of the social media bot farm.

“Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel,” law enforcement agencies from Canada, the Netherlands, and the U.S. reported.

Meliorator includes an administrator panel called Brigadir and a backend tool called Taras, which is used to control the realistic-appearing accounts, whose profile pictures and biographical information were generated using an open-source program called Faker.
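Faker is a real open-source library for generating plausible fake identity data. As a hedged illustration of the concept only (not the actors' actual tooling, which has not been published), the following minimal standard-library sketch shows the kind of persona generation such a tool automates; every name and word list here is invented for the example:

```python
import random

# Hypothetical word lists for illustration; the real Faker library draws
# from much larger, locale-aware datasets.
FIRST = ["Alex", "Maria", "John", "Elena"]
LAST = ["Smith", "Carter", "Novak", "Brooks"]
CITIES = ["Dallas, TX", "Tampa, FL", "Columbus, OH"]
INTERESTS = ["politics", "history", "sports", "local news"]

def make_persona(seed=None):
    """Generate one fake profile: display name, handle, and short bio."""
    rng = random.Random(seed)
    name = f"{rng.choice(FIRST)} {rng.choice(LAST)}"
    handle = name.lower().replace(" ", "_") + str(rng.randint(10, 99))
    bio = f"{rng.choice(CITIES)} | Interested in {rng.choice(INTERESTS)}"
    return {"name": name, "handle": handle, "bio": bio}

persona = make_persona(seed=42)
print(persona["name"], "-", persona["bio"])
```

Seeding the generator makes a persona reproducible, which is useful when auditing or demonstrating how cheaply such profiles can be mass-produced.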

Each of these accounts had a distinct identity or “soul” based on one of three bot archetypes: those that propagate political ideologies favorable to the Russian government, those that share messaging already circulated by other bots, and those that spread disinformation shared by both bot and non-bot accounts.

While the software package was identified only on X, further analysis revealed the threat actors’ intention to expand its functionality to other social media platforms.

Additionally, the system bypassed X’s safeguards for verifying the authenticity of users by automatically copying one-time passcodes sent to the registered email addresses and assigning proxy IP addresses to AI-generated personas based on their assumed location.

“Bot persona accounts make obvious attempts to avoid bans for terms of service violations and avoid being noticed as bots by blending into the larger social media environment,” the agencies noted. “Much like authentic accounts, these bots follow genuine accounts reflective of their political leanings and interests listed in their biography.”

“Farming is a beloved pastime for millions of Russians,” RT was quoted as telling Bloomberg in response to the allegations, without directly denying them.

This marks the first time the U.S. has publicly accused a foreign government of using AI in a foreign influence operation. No criminal charges have been announced, but the investigation remains ongoing.

Doppelganger Lives On
Recently, Google, Meta, and OpenAI have warned that Russian disinformation operations, including those orchestrated by a network called Doppelganger, have repeatedly used their platforms to spread pro-Russian propaganda.

“The campaign is still active, along with the network and server infrastructure responsible for the content distribution,” Qurium and EU DisinfoLab stated in a report published Thursday.

“Astonishingly, Doppelganger does not operate from a hidden data center in a Vladivostok Fortress or a remote military Bat cave but from newly created Russian providers operating inside the largest data centers in Europe. Doppelganger works closely with cybercriminal activities and affiliate advertisement networks.”

At the core of the operation is a network of bulletproof hosting providers, including Aeza, Evil Empire, GIR, and TNSECURITY, which have also hosted command-and-control domains for various malware families like Stealc, Amadey, Agent Tesla, Glupteba, Raccoon Stealer, RisePro, RedLine Stealer, RevengeRAT, Lumma, Meduza, and Mystic.

Furthermore, NewsGuard recently discovered that popular AI chatbots are prone to repeating “fabricated narratives from state-affiliated sites masquerading as local news outlets in one third of their responses.”

Influence Operations from Iran and China
Additionally, the U.S. Office of the Director of National Intelligence (ODNI) reported that Iran is “becoming increasingly aggressive in their foreign influence efforts, aiming to sow discord and undermine confidence in our democratic institutions.”

The agency also noted that Iranian actors continue to refine their cyber and influence activities, using social media platforms and issuing threats, and are amplifying pro-Gaza protests in the U.S. by posing as activists online.

Google, for its part, reported blocking over 10,000 instances of Dragon Bridge (aka Spamouflage Dragon) activity in the first quarter of 2024. This spammy-yet-persistent China-linked influence network promoted narratives portraying the U.S. negatively, as well as content about the elections in Taiwan and the Israel-Hamas war aimed at Chinese speakers.

In comparison, the tech giant disrupted no fewer than 50,000 such instances in 2022 and 65,000 more in 2023, bringing the total to over 175,000 instances over the network’s lifetime.

“Despite their continued profuse content production and the scale of their operations, DRAGONBRIDGE achieves practically no organic engagement from real viewers,” Threat Analysis Group (TAG) researcher Zak Butler said. “In cases where DRAGONBRIDGE content did receive engagement, it was almost entirely inauthentic, coming from other DRAGONBRIDGE accounts and not from authentic users.”
