Generative AI threat actors

In the ever-evolving cybersecurity landscape, threat actors constantly look for new tools and technologies to advance their malicious endeavours.

Over the past few years, Mandiant, a US cybersecurity firm and subsidiary of Google, has been closely monitoring these adversaries’ interest in, and use of, AI capabilities. While the potential for generative AI in cyber threats is substantial, practical implementation remains limited.

In this article, we’ll delve into the various aspects of threat actors’ engagement with generative AI, including its applications in information operations, social engineering, and malware development, as outlined by Mandiant.


Generative AI in intrusion operations

The adoption of AI in intrusion operations has been relatively slow. Where threat actors have embraced it, it has primarily been for social engineering, to deceive and manipulate their targets; even there, uptake remains limited.


Generative AI in information operations

Generative AI holds tremendous potential to amplify the capabilities of information operations actors in two key ways: scalability and the creation of realistic fabricated content.


Scaling content production

Generative AI empowers information operations actors with limited resources to generate high-quality content at scale. This includes crafting articles, political cartoons, and benign filler content tailored to specific narratives. Conversational AI chatbots driven by large language models (LLMs) can also bridge linguistic barriers when targeting foreign audiences.
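
To illustrate how low this barrier is, the hypothetical sketch below localises one message into several languages in a single loop. It assumes the `openai` Python client, an API key in the environment, and an illustrative model name; it is a minimal sketch, not any actor’s actual tooling.

```python
# Minimal sketch: localising one message into several languages with an LLM API.
# Assumes the `openai` Python client is installed and OPENAI_API_KEY is set;
# the model name is illustrative only.
from openai import OpenAI

client = OpenAI()
message = "Our community group is hosting a public meeting this Friday."

for language in ["French", "Indonesian", "Spanish"]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Translate into natural, idiomatic {language}: {message}"}],
    )
    print(language, "->", response.choices[0].message.content)
```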


Enhancing persuasion with hyper-realistic content

Hyper-realistic AI-generated content can exert a stronger persuasive impact on target audiences. Threat actors have already experimented with fabricating content, such as audio recordings, using AI models trained on real individuals’ voices. This technology can be employed for nefarious purposes, including disseminating inflammatory content and fake public service announcements.


Information operations actors’ adoption of generative AI technologies

The adoption of generative AI by information operations actors varies across media (images, video, text, and audio), shaped by tool availability and the effectiveness of each medium. Mandiant assesses that AI-generated images and videos are most likely to be employed in the near term.


AI-generated imagery

Generative adversarial networks (GANs) and generative text-to-image models have been employed to produce realistic images.

Publicly available GAN-based image tools have frequently been used in information operations, most often to create profile photos for inauthentic personas, by actors aligned with nation-states including Russia, the People’s Republic of China (PRC), Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador, and El Salvador.
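
For context on how such tools work, the PyTorch sketch below outlines a minimal DCGAN-style generator: its output is steered only by a random noise vector, with no text prompt involved, which is why GAN tools suit bulk face generation but offer little semantic control. The architecture and dimensions are illustrative assumptions, not any specific tool’s design.

```python
# Minimal DCGAN-style generator sketch (PyTorch). Architecture and sizes are
# illustrative assumptions, not any specific tool's design.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random latent vector to a 32x32 RGB image; no text prompt is involved."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                                     # 32x32, values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

g = Generator()
z = torch.randn(1, 100, 1, 1)  # the only "control" is this random noise vector
image = g(z)                   # shape: (1, 3, 32, 32)
```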

Text-to-image models, which create customised images from text prompts, are expected to gain traction, as they offer more versatility and produce images that are harder to detect than GAN output.
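
By contrast, a text-to-image pipeline conditions generation directly on a natural-language prompt. A minimal sketch using the open-source Hugging Face diffusers library follows; the model identifier and prompt are illustrative assumptions, the weights must be downloaded separately, and a GPU is assumed.

```python
# Sketch of prompt-conditioned image generation with the diffusers library.
# Model identifier and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Unlike a GAN's latent vector, the text prompt gives direct semantic control.
image = pipe("a press photo of a crowded city square at dusk").images[0]
image.save("generated.png")
```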


AI-generated and manipulated video

Threat actors have embraced AI-generated and manipulated videos for various purposes. These technologies include customisable AI-generated human avatars for news-style presentations and face-swap tools for inserting individuals into existing footage. Both have been employed to propagate narratives and create convincing deepfake videos.

For example, in March 2022, following the Russian invasion of Ukraine, an information operation used various means, including a deepfake video of Ukrainian President Volodymyr Zelensky, to promote the fabricated claim that Ukraine had capitulated to Russia.


AI-generated text

While AI-generated text has so far seen limited use in information operations, the availability and ease of use of text-generation tools such as ChatGPT make widespread adoption likely. These tools can produce text tailored to specific narratives, making them valuable assets for such operations.


AI-generated audio

AI-generated audio has likewise seen limited use in information operations, but the potential for misuse is significant. Text-to-voice models and voice-cloning technology can be harnessed to create convincing audio content for social engineering campaigns and other deception.


Improving social engineering

Generative AI can enhance threat actors’ social engineering efforts by improving reconnaissance, creating lure material, and enabling more effective communication.


Reconnaissance

AI-powered machine learning and data-science tools can process massive volumes of stolen and open-source data swiftly and efficiently. These tools aid espionage actors in identifying patterns in the data, targeting foreign individuals for intelligence recruitment, and crafting effective social engineering campaigns.
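
As a benign illustration of the underlying technique, the scikit-learn sketch below groups a small text corpus with TF-IDF features and k-means to surface patterns; the documents and cluster count are placeholder assumptions, though the same approach scales to far larger datasets.

```python
# Illustrative sketch: surfacing patterns in a text corpus with TF-IDF + k-means.
# The documents and cluster count are placeholders for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Quarterly budget review scheduled for the finance team",
    "Finance team expense reports due before the audit",
    "Server maintenance window announced for the weekend",
    "IT notice: weekend downtime for server patching",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for label, doc in sorted(zip(labels, docs)):
    print(label, doc)  # documents sharing a cluster label share vocabulary patterns
```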


Lure material

Generative AI helps create more compelling and authentic-looking lure content, increasing the likelihood of successful compromise in phishing campaigns. By producing more fluent, natural-sounding language, it removes the grammatical errors and awkward phrasing that often give phishing messages away, making operations more convincing.


Developing malware

Threat actors are expected to increasingly leverage generative AI in malware development. AI can assist in writing new malware and improving existing code, lowering the barrier to entry for less technically proficient attackers.


Key takeaways

While generative AI holds significant potential for threat actors, its widespread adoption is still limited. However, as awareness and capabilities surrounding AI technologies develop, malicious actors are likely to leverage them more effectively. Users and enterprises must remain vigilant.

Nyman Gibson Miralis provides expert advice and representation in cases of alleged cybercrimes.

Contact us if you require assistance.