New developments in the cybercriminal use of AI

Generative AI tools have the potential to enable truly disruptive cyberattacks in the near future. But are criminal LLMs or ChatGPT-like capabilities already being offered in hacking tools today?

AI-generated image of a hacker: still harmless compared with the other ways cybercriminals are putting the technology to use. (Image: Pixabay.com)

Artificial intelligence offers many opportunities and holds great potential for users. But there is another side to the coin: AI can also be used for criminal purposes. An analysis by Trend Micro shows the latest developments and highlights the threats that can be expected in the near future.

Jailbreaking-as-a-Service

While AI technologies are rapidly gaining acceptance in the business world, attempts to build dedicated cybercriminal Large Language Models (LLMs) were largely abandoned in the cybercrime underground last year. Instead, criminals shifted their focus to "jailbreaking" existing models, i.e. using special prompts to trick them into bypassing their built-in safety measures. This has given rise to offerings such as Jailbreaking-as-a-Service: criminals use sophisticated techniques to get LLMs to answer requests that should actually be blocked, ranging from role-playing and hypothetical scenarios to queries in foreign languages. Providers such as OpenAI and Google are working to close these loopholes, which in turn forces cybercriminal users to resort to ever more sophisticated jailbreak prompts. The result is a market for a new class of criminal services in the form of jailbreaking chatbots.

"Cybercriminals were abusing AI long before the recent hype around generative AI in the IT industry. That's why we delved into criminal underground forums to find out how cybercriminals actually use and deploy AI to achieve their goals, and what kind of AI-powered criminal services are being offered," explains David Sancho, Senior Threat Researcher at Trend Micro. "We looked at the underground conversations about AI and found that interest in generative AI has followed general market trends, but adoption seems to be lagging behind. We've also seen LLM offerings from criminals for criminals. These include FraudGPT, DarkBARD, DarkBERT and DarkGPT, which have many similarities. For this reason, we suspect that they most likely function as wrapper services around the legitimate ChatGPT or Google BARD - we call them Jailbreaking-as-a-Service offerings," Sancho continues. "We have also investigated other, potentially fake, criminal LLM offerings: WolfGPT, XXXGPT and Evil-GPT. And we are looking at deepfake services for criminals: we've seen pricing and some early business models around these AI-powered fake images and videos."

Deepfake services on the rise

Deepfakes have been around for some time, but only recently have genuine cybercriminal offerings been discovered. Criminals are selling deepfake services to bypass identity verification systems. This is becoming a growing problem, particularly in the financial sector, as banks and cryptocurrency exchanges demand ever more stringent checks. At the same time, deepfakes are becoming cheaper and easier to create, and cybercriminals are using the technology to produce fake images and videos that can fool even advanced security systems. A stolen ID document is often enough to create a convincing fake image.

What does this mean for the future?

These developments show that criminals are constantly finding new ways to misuse AI technologies. Although there has been no major disruption so far, it is only a matter of time before more serious attacks emerge. Companies and private individuals must therefore remain vigilant and continuously improve their cybersecurity measures in order to be prepared for these threats. Three fundamental rules of cybercriminal business models will determine when malicious actors turn to GenAI on a large scale:

  1. Criminals want an easy life: The aim is to achieve a given economic result with as little effort and as little risk as possible.
  2. New technologies must beat existing tools: Criminals only adopt new technologies if the return on investment is higher than with existing methods.
  3. Evolution instead of revolution: Criminals prefer gradual adjustments over comprehensive overhauls in order to avoid new risk factors.

Conclusion: cybercriminal use of AI is only just beginning

The need for secure, anonymous and untraceable access to LLMs remains. This will encourage cybercriminal services to keep exploiting new LLMs that are easier to jailbreak or tailored to their specific needs. There are currently more than 6,700 readily available LLMs on the AI community platform Hugging Face. Existing and new criminal tools can also be expected to integrate more and more GenAI functions. Cybercriminals have only just begun to scratch the surface of the real possibilities that GenAI offers them.

Trend Micro has compiled further information on this topic in a blog post.

