Microsoft and OpenAI Warn: Nation-States Employ AI as Weapon in Cyberattacks

Advanced persistent threats (APTs) associated with China, Iran, North Korea, and Russia are leveraging large language models (LLMs) to bolster their activities.

Recent reports from OpenAI and Microsoft reveal that five significant threat actors have used OpenAI’s services for malicious purposes such as research and fraud. Upon identifying them, OpenAI promptly closed all associated accounts.

While the notion of AI-boosted cyber operations by nation-states may sound alarming at first, there is a silver lining: none of the observed abuses of LLMs has proven particularly catastrophic thus far.

“Threat actors’ current use of LLM technology reflects behaviors consistent with attackers using AI as another productivity tool,” Microsoft noted in its report. “Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ use of AI.”

The language support provided by LLMs “is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships,” Microsoft noted.

The primary uses of OpenAI services by threat actors include:

  • Querying open-source information
  • Translation
  • Finding coding errors
  • Running basic coding tasks

LLMs also offer assistance for common tasks performed during cyber campaigns. These include reconnaissance, such as learning about potential victims’ industries, locations, and relationships.

Five Nation-State Threat Actors Weaponizing AI

The nation-state APTs currently leveraging OpenAI are among the most notorious globally.

Forest Blizzard – Russia

One such group tracked by Microsoft is Forest Blizzard, more commonly known as Fancy Bear, which is affiliated with the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU). The group has been employing LLMs for basic scripting tasks, such as file manipulation, data selection, and multiprocessing, as well as for intelligence-gathering activities, including research into satellite communication protocols and radar imaging technologies, potentially relating to the ongoing war in Ukraine.
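
Microsoft’s report does not reproduce the group’s actual scripts, but the categories it names map to routine automation. As a purely illustrative sketch, the hypothetical Python script below (the directory, file extensions, and per-file task are all invented) shows the kind of basic file-manipulation and multiprocessing code an LLM can produce on request:

```python
# Hypothetical sketch of the "basic scripting" described in the report:
# select files of interest, then process them across worker processes.
# The paths, extensions, and per-file task are illustrative placeholders.
import os
from multiprocessing import Pool

TARGET_DIR = "/tmp/collected"            # hypothetical staging directory
EXTENSIONS = (".docx", ".pdf", ".xlsx")  # hypothetical file types of interest

def select_files(root: str) -> list[str]:
    """Walk a directory tree, keeping only files with matching extensions."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(EXTENSIONS):
                matches.append(os.path.join(dirpath, name))
    return matches

def process_file(path: str) -> tuple[str, int]:
    """Placeholder per-file task: record the file's size in bytes."""
    return path, os.path.getsize(path)

if __name__ == "__main__":
    files = select_files(TARGET_DIR)
    # Fan the per-file work out across four worker processes.
    with Pool(processes=4) as pool:
        for path, size in pool.map(process_file, files):
            print(f"{path}: {size} bytes")
```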

Two Chinese state-sponsored actors have recently been using ChatGPT: Charcoal Typhoon (also known as Aquatic Panda, ControlX, RedHotel, BRONZE UNIVERSITY) and Salmon Typhoon (also known as APT4, Maverick Panda).

Charcoal Typhoon – China

Charcoal Typhoon has used AI for both pre-compromise activities, such as gathering intelligence on specific technologies, platforms, and vulnerabilities, and post-compromise actions, including executing advanced commands, attaining deeper system access, and establishing control over systems.

Salmon Typhoon – China

Salmon Typhoon has predominantly employed LLMs as an intelligence-gathering tool, extracting publicly available information concerning high-profile individuals, intelligence agencies, internal and international politics, among other topics. Additionally, it has made unsuccessful attempts to leverage OpenAI for assistance in developing malicious code and researching stealth tactics.

Crimson Sandstorm – Iran

Iran’s Crimson Sandstorm (also known as Tortoiseshell, Imperial Kitten, Yellow Liderc) has utilized OpenAI to craft phishing materials—such as deceptive emails posing as communications from international development agencies or feminist groups—as well as to generate code snippets aiding in web scraping and executing tasks upon user sign-in to applications.
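
The web-scraping assistance described is likewise commodity code. A minimal, hypothetical sketch of what such an LLM-generated snippet typically looks like, using the widely available requests and BeautifulSoup libraries (the URL and the extraction logic are placeholders):

```python
# Minimal sketch of the kind of web-scraping snippet described in the
# report. The URL and the elements extracted are placeholders.
import requests
from bs4 import BeautifulSoup

def scrape_links(url: str) -> list[str]:
    """Fetch a page and return every hyperlink target it contains."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)]

if __name__ == "__main__":
    for link in scrape_links("https://example.com"):
        print(link)
```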

Emerald Sleet – North Korea

Lastly, Kim Jong-Un’s Emerald Sleet (also known as Kimsuky, Velvet Chollima) has followed suit, employing OpenAI for basic scripting tasks, generating phishing content, and researching publicly available information on vulnerabilities, as well as on experts, think tanks, and government organizations involved in defense matters and North Korea’s nuclear weapons program.

AI Isn’t Game-Changing (Yet)

If these malicious applications of AI seem practical rather than the stuff of science fiction, there is a reason for that.

Joseph Thacker, principal AI engineer and security researcher at AppOmni, explains, “Threat actors proficient enough to be monitored by Microsoft are likely already adept at software development. Generative AI is remarkable, but its primary role is enhancing human efficiency rather than pioneering breakthroughs. These threat actors are likely using LLMs to expedite the process of code writing, such as malware, but the impact isn’t significantly noticeable because they already possess malware. They still possess malware. It’s plausible they’ve become more efficient, but fundamentally, they aren’t introducing anything groundbreaking yet.”

Despite being cautious not to overemphasize its impact, Thacker warns that AI still provides advantages for attackers. “Malicious actors will probably be able to deploy malware on a larger scale or on systems they previously couldn’t target. LLMs excel at translating code from one language or architecture to another. Consequently, they might convert their malicious code into new languages they weren’t proficient in previously,” he explains.
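
Thacker’s point about translation is straightforward to picture: porting code between languages is a one-prompt request to an LLM. A hypothetical sketch using the OpenAI Python SDK (the model name and the snippet being ported are placeholders, not anything taken from the reports):

```python
# Hypothetical sketch of LLM-assisted code translation, the capability
# Thacker describes. Assumes the official OpenAI Python SDK; the model
# name and the snippet being ported are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source_snippet = '''
def checksum(data: bytes) -> int:
    return sum(data) % 256
'''

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Translate this Python function to idiomatic Rust:\n"
                   + source_snippet,
    }],
)
print(response.choices[0].message.content)
```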

Furthermore, “if a threat actor identifies a novel application, it could still be operating in stealth mode, undetected by these companies, so it’s not impossible. I have encountered fully autonomous AI agents capable of ‘hacking’ and discovering genuine vulnerabilities. If any malicious actors have developed something similar, it would pose a significant threat.”

For these reasons, he advises, “Companies must stay vigilant and maintain basic security practices.”