Artificial Intelligence (AI) offers powerful applications for efficiency and autonomy. Beneath the surface of that progress, however, lies a darker reality: the same capabilities can be exploited for malicious ends. As AI empowers industries and individuals, it concurrently opens doors for nefarious schemes.
A recent report, Malicious Uses and Abuses of Artificial Intelligence, examines this threat. Its findings are based on a workshop held in March 2020 and organised by Europol, Trend Micro, and the United Nations Interregional Crime and Justice Research Institute (UNICRI).
This article explores key current criminal applications of AI: enhancing malware, abusing smart assistants, guessing passwords, breaking CAPTCHAs, and aiding encryption and decryption.
AI-enhanced malware
Research into using AI to improve the efficiency of malware is still in its early stages, yet evidence suggests that criminals are actively advancing its application. AI can help create malware designed to evade detection by machine learning-based antivirus systems, and it can help pinpoint promising targets for attack.
Additionally, AI introduces novel twists on traditional hacking methods, making attacks less predictable and harder for human defenders to anticipate.
Abusing Smart Assistants
The growing prevalence of AI assistants in households creates a potential attack surface. For instance, attackers may target exposed smart speakers to issue audio commands to nearby assistants such as Amazon Alexa or Google Home. Hijacking a smart assistant through a vulnerable audio device could give malicious actors a foothold for breaching a smart home’s security. A stealthier variant involves issuing commands that are imperceptible to the human ear.
AI-supported password guessing
Machine learning can be used to improve password-guessing algorithms, leading to more targeted and more effective password guesses.
Machine learning is a subset of AI in which algorithms are trained on a set of data to infer patterns and determine the actions needed to achieve a given goal.
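As a drastically simplified illustration of this kind of pattern inference, the sketch below trains a character-level bigram model on a tiny, hypothetical list of example passwords and samples new candidate guesses from it. Real password-guessing tools train far larger models on leaked credential dumps; every name and data value here is an assumption made for illustration only.

```python
import random
from collections import defaultdict

# Toy stand-in for a leaked-password corpus (hypothetical data).
TRAINING_PASSWORDS = ["password1", "passw0rd", "sunshine1", "sunshine7", "pass1234"]

START, END = "^", "$"  # sentinel markers for start and end of a password

def train_bigram_model(passwords):
    """Count character-bigram transitions observed in the training passwords."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        chars = [START] + list(pw) + [END]
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def generate_candidate(model, rng, max_len=16):
    """Sample one candidate password by walking the transition counts."""
    out, cur = [], START
    while len(out) < max_len:
        next_chars = list(model[cur])
        weights = [model[cur][c] for c in next_chars]
        cur = rng.choices(next_chars, weights=weights)[0]
        if cur == END:
            break
        out.append(cur)
    return "".join(out)

rng = random.Random(42)
model = train_bigram_model(TRAINING_PASSWORDS)
candidates = {generate_candidate(model, rng) for _ in range(20)}
print(sorted(candidates))
```

Because the model learns which characters tend to follow which, its guesses concentrate on human-like strings rather than the full random search space, which is the core advantage such techniques offer over brute force.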
AI-supported CAPTCHA breaking
Cybercriminals are developing systems that leverage machine learning to break CAPTCHA images automatically, enabling large-scale abuse of web services.
Software that implements neural networks to solve CAPTCHAs, such as XEvil 4.0, is being advertised on Russian underground forums and rented out to users for 4,000 rubles weekly (approximately US$54 as of writing).
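At its core, automated CAPTCHA solving is an image-classification problem. The sketch below is a heavily simplified, hypothetical illustration: it classifies tiny noisy character bitmaps by nearest-neighbour matching against clean templates. Production tools such as XEvil instead use neural networks trained on real CAPTCHA images; the 3x5 templates and sample input here are invented for the example.

```python
# Hypothetical 3x5 pixel templates for a few digits ("1" = ink, "0" = background),
# flattened row by row into 15-character strings.
TEMPLATES = {
    "0": "111101101101111",
    "1": "010110010010111",
    "7": "111001010010010",
}

def hamming(a, b):
    """Number of differing pixels between two flattened bitmaps."""
    return sum(x != y for x, y in zip(a, b))

def classify(bitmap):
    """Return the template label closest to the observed bitmap."""
    return min(TEMPLATES, key=lambda label: hamming(TEMPLATES[label], bitmap))

def solve(bitmaps):
    """'Solve' a toy CAPTCHA given one noisy bitmap per character."""
    return "".join(classify(b) for b in bitmaps)

# A noisy "1" (two pixels flipped) still matches its own template best.
noisy_one = "010110011010110"
print(solve([noisy_one, TEMPLATES["7"]]))  # → 17
```

The point of the toy is that even a crude classifier tolerates distortion and noise; trained neural networks extend the same idea to the warped, overlapping characters in real CAPTCHAs.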
AI-aided encryption and decryption
AI applications for improving or breaking encryption are still in their infancy. However, an experiment conducted by Google in 2016 provided early evidence of AI's potential to assist decryption, a capability that cybercriminals could abuse.
The report on the malicious uses of AI highlights several concerning trends: early-stage AI-powered malware, exploitation of vulnerabilities in smart assistants, and advances in password-guessing algorithms, CAPTCHA-breaking systems, and AI-aided encryption and decryption. The potential for malicious exploitation of AI and machine learning is already evident.