Criminal use of ChatGPT

The release and widespread use of ChatGPT – a large language model (LLM) developed by OpenAI – has attracted significant public attention. ChatGPT has proven itself capable of complex creative work and offers great opportunities to legitimate businesses and members of the public. However, as with most technological developments, criminals and bad actors may seek to exploit it for malicious purposes.

The Europol Innovation Lab organised workshops to explore how criminals can abuse LLMs such as ChatGPT. This article examines the key findings of the recent Europol report published following those workshops, and how this new technology can be used to commit specific crimes.


Criminal use cases

ChatGPT excels at providing users with ready-to-use information in response to a wide range of prompts. If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering step-by-step information that can be used to facilitate a crime.

Europol identified that ChatGPT can be used to facilitate fraud, impersonation and social engineering, and cybercrime, as well as to spread disinformation.


Fraud, impersonation and social engineering

ChatGPT’s ability to generate highly authentic text from user prompts makes it an ideal tool for phishing. The technology allows even those with only basic English skills to create fraudulent emails that appear highly realistic and convincing, with context-specific content that can be adapted to various types of internet fraud.

This poses a significant threat, as it allows criminals to create more sophisticated and targeted scams, conduct social engineering, and generate fake social media engagement.

The use of LLMs such as ChatGPT allows for quicker and more authentic creation of these fraudulent communications, increasing their reach and effectiveness. The technology also enables the impersonation of specific individuals or groups, which can be used to mislead potential victims and gain their trust.


Cybercrime

ChatGPT can generate code in various programming languages, enabling those with little or no technical knowledge to create basic tools for cybercrime. The safeguards that prevent the model from generating malicious code can be bypassed by breaking prompts down into individual steps. Threat actors have already exploited ChatGPT’s ability to transform natural language prompts into working code, using it to create malware and a full infection flow.

The release of GPT-4 is expected to provide even more effective assistance to cybercriminals, given its better understanding of code context and its ability to identify and correct coding errors. This presents a significant challenge for cybersecurity, as even those with little technical knowledge can use advanced technology to automate sophisticated criminal operations.


Disinformation

ChatGPT is highly effective at quickly producing large volumes of authentic-sounding text, making it an ideal tool for propaganda and disinformation. It allows users to generate and disseminate messages reflecting a specific narrative with minimal effort. For example, ChatGPT can be used to generate propaganda on behalf of other parties, promoting or defending views that have been debunked as disinformation or fake news.


Recommendations

While the Europol workshops focused on identifying potentially malicious uses of ChatGPT that are already possible today, their purpose was also to analyse these findings and generate recommendations on how law enforcement can better prepare for what may still be to come.

The report identified that the law enforcement community needs to be aware of both the positive and negative implications of LLMs like ChatGPT. This awareness is crucial to identifying and addressing potential loopholes and preventing malicious use. Law enforcement agencies should understand the impact of LLMs on different crime areas so they can predict and investigate abuse of this technology. They also need to develop the skills to assess the accuracy and potential biases of generated content.

Law enforcement agencies may also want to explore the use of customised LLMs for tailored use, provided that fundamental rights are respected and appropriate safeguards are in place.


Key takeaways

Large language models (LLMs) like ChatGPT may be used to facilitate fraud, impersonation and social engineering, and cybercrime, as well as to spread disinformation. Law enforcement agencies need to be aware of these threats, develop the skills to better predict and mitigate them, and potentially use this technology to help fight crime.

Nyman Gibson Miralis provides expert advice and representation in cases of alleged fraud, cybercrime, and other complex crimes.

Contact us if you require assistance.