AI-enabled future crime

The Dawes Centre for Future Crime at UCL has undertaken a groundbreaking study that delves into the dark underbelly of AI technology, revealing the potential for AI-enabled future crimes. While AI has shown immense promise in various aspects of our lives, it also opens Pandora’s box, offering criminals new tools and opportunities. The study identifies 20 distinct AI-enabled future crimes, categorising each by its level of concern and ranking it along four dimensions: harm, criminal profit, achievability, and difficulty of defeat.


Using AI for criminal purposes

AI can be exploited for criminal purposes in multiple ways, which are not mutually exclusive:

  • As a tool for crime, where AI is used to undertake a traditional crime, such as theft, intimidation or terrorism.
  • As a target for criminal activity, where AI systems are targeted by criminals – such as attempts to bypass protective AI systems or to make systems fail or behave erratically.
  • As a context for crime, where fraudulent activities might depend on the victim believing that some AI functionality (such as predicting stock markets or manipulating voters) is possible even if it is not.


Future crimes involving AI

The study identified 20 types of AI-enabled future crime. These were categorised as being of high, medium, or low concern. The crimes were also ranked along four dimensions: harm, criminal profit, achievability, and difficulty of defeat. More information on these dimensions is available in the report.


High concern crimes

  • Audio/visual impersonation: Criminals can impersonate individuals through convincing audio or video manipulation, potentially leading to financial fraud or the manipulation of public opinion.
  • Driverless vehicles as weapons: The advent of autonomous vehicles presents an opportunity for terrorists to carry out coordinated attacks without needing a driver in each vehicle.
  • Tailored phishing: AI-driven phishing attacks can craft highly convincing messages, making it difficult to distinguish between genuine and malicious communications.
  • Disrupting AI-controlled systems: As AI systems become integral to various sectors, criminals may target them, causing chaos, power failures, or financial disruptions.
  • Large-scale blackmail: AI can facilitate large-scale data harvesting and personal vulnerability identification, making traditional blackmail scalable.
  • AI-authored fake news: AI-generated fake news can manipulate public perception, though it may not always directly yield financial profit.


Medium concern crimes

  • Misuse of military robots: The use of military AI hardware by criminal or terrorist organisations poses a serious threat, though the extent remains uncertain.
  • Snake oil: Fraudulent services masquerading as AI-driven solutions can deceive organisations, but education can mitigate this threat.
  • Data poisoning: Deliberate manipulation of machine learning data can introduce biases, making it difficult to detect.
  • Learning-based cyber-attacks: AI enables cyber-attacks that are both highly targeted and massive in scale, probing many systems simultaneously.
  • Autonomous attack drones: Autonomous drones controlled by AI could facilitate criminal activities while keeping the perpetrator at a distance.
  • Online eviction: Denial of access to essential online services can be used for extortion or chaos creation.
  • Tricking face recognition: Criminals can exploit AI-driven face recognition systems by using techniques like morphing.
  • Market bombing: Manipulating financial markets via AI is complex, costly, and challenging to achieve, making it a medium concern.


Low concern crimes

  • Bias exploitation: Exploiting existing biases in algorithms.
  • Burglar bots: Small autonomous robots used for burglaries.
  • Evading AI detection: Undermining AI systems used by law enforcement or security services.
  • AI-authored fake reviews: Generating fake content to manipulate review scores.
  • AI-assisted stalking: Monitoring individuals’ location and activity.
  • Forgery: Generating fake content such as art or music.


Key takeaways

The study conducted by the Dawes Centre for Future Crime at UCL sheds light on the potential misuse of AI technology for criminal purposes. While AI has the power to revolutionise various industries, its misuse poses significant challenges to society. Law enforcement, policymakers and technology developers must remain vigilant and proactive in addressing these emerging threats to safeguard our increasingly AI-dependent world.

Nyman Gibson Miralis provides expert advice and representation in cases involving alleged AI-enabled crimes.

Contact us if you require assistance.