A recent report by Europol, Trend Micro, and the United Nations Interregional Crime and Justice Research Institute explores the Malicious Uses and Abuses of Artificial Intelligence.
In addition to current applications of Artificial Intelligence (AI) to aid the commission of crimes, the report explores plausible future scenarios in which criminals might abuse AI technologies to facilitate their activities. This article examines how criminal abuse of AI may evolve over time, as identified in the report.
Social engineering at scale
An innovative scammer could deploy AI systems that allow them to focus only on the potential victims who are easiest to deceive. Machine Learning (ML) algorithms can be used to anticipate a target's replies and respond accordingly, lending credibility to the scammer's story.
Content generation
AI chatbots such as ChatGPT can generate content for use in disinformation campaigns. AI can also be used to learn which kinds of content perform best and are shared most widely.
Criminals can also use ML to generate and distribute content for phishing and spam email campaigns in a variety of languages, automating and amplifying the scope and scale of malware distribution worldwide.
Text content synthesis can also be employed to generate text that imitates the writing style of a specific individual. For example, a message mimicking the writing style of a company's CEO could be used to trick recipients inside the company into complying with fraudulent requests.
Content parsing
Malicious actors have been working on document-scraping malware that, once installed on a target machine, looks for specific pieces of information, such as all personal employee data on the server of a human resources department.
So far, the malicious use of document-scraping malware remains limited; however, it is likely that future, more sophisticated scraping malware will be able to better identify relevant content and even perform targeted searches. For example, malicious actors might evolve from scraping a company server for “everything that resembles a phone number” to scraping “all the emergency contact phone numbers for all the top tier managers.”
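The gap between broad and targeted scraping can be illustrated with the same kind of pattern matching that defensive data-loss-prevention tools use. A minimal sketch in Python (the sample document, keywords, and phone-number regex are illustrative assumptions, not details from the report):

```python
import re

document = """
Emergency contacts:
CEO Jane Doe (top tier) - emergency: +1 555-0143
Cafeteria supplier hotline: +1 555-0199
"""

# Broad scrape: "everything that resembles a phone number"
phone_pattern = re.compile(r"\+?\d[\d\s().-]{7,}\d")
all_numbers = [m.group() for m in phone_pattern.finditer(document)]

# Targeted scrape: only numbers on lines that also mention a role keyword
keywords = ("CEO", "top tier", "emergency")
targeted = [
    m.group()
    for line in document.splitlines()
    if any(k in line for k in keywords)
    for m in phone_pattern.finditer(line)
]

print(all_numbers)  # both numbers
print(targeted)     # only the CEO's emergency contact
```

The broad pass returns every number-like string; the targeted pass narrows results to lines that match contextual keywords, which is the shift the report anticipates.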
Improved social profile ageing
Some organisations have implemented AI-based security systems designed to detect unusual user behaviour; however, criminals may use AI to emulate “normal” behaviour and evade detection by these systems.
These techniques can also be used to fake activity in stolen social media accounts and so avoid their closure. This helps criminals monetise such accounts for longer, for example by selling likes or followers.
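The behavioural baselines such detection systems rely on can be as simple as statistics over a user's historical activity. A minimal defensive sketch in pure Python (the feature, threshold, and sample data are illustrative assumptions):

```python
from statistics import mean, stdev

def is_anomalous(history, observation, threshold=3.0):
    """Flag an observation whose z-score against the user's
    historical baseline exceeds the threshold."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# Daily megabytes downloaded by one user over two weeks
baseline = [102, 98, 110, 95, 105, 99, 101, 97, 103, 100, 96, 104, 98, 102]

print(is_anomalous(baseline, 106))  # typical day -> False
print(is_anomalous(baseline, 900))  # bulk download -> True
```

A simple threshold like this also shows why emulating “normal” behaviour works: an attacker who keeps each observation close to the historical mean never trips the detector.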
Enhanced phishing
Criminals could use ML to analyse the performance of phishing campaigns, prune email lists that are unlikely to yield victims, craft new emails that are more likely to succeed, and send them to the addresses belonging to the most susceptible recipients.
Robocalling v2.0
Robocalling has become a way of carrying out a voice phishing scam over a regular telephone. In this kind of scam, an automated caller delivers the voice phishing message and tries to entice the victim into visiting a malicious website. By adding smart automation to such a system, perpetrators can monitor whether the scam succeeds and which arguments and logic are most convincing to potential victims. Over time, scammers can use the data obtained to train progressively better ML models and amplify their attacks.
Another intriguing possibility for robocaller improvement might involve the use of audio deepfakes to fool the user into thinking that they are dealing with a person whom they know.
Key takeaways
While criminals are already abusing AI to facilitate their activities, the report identifies plausible future scenarios in which criminals may become even more sophisticated in their abuse of AI. This includes conducting social engineering at scale, using advanced content generation and content parsing to improve the success of malicious campaigns, better “ageing” social profiles, as well as enhancing phishing and robocalling campaigns.