
How Hackers Abuse ChatGPT Features for Their Cybercriminal Activities – Bypassing Censorship

Media coverage and a steady stream of feature releases have fueled the rapid rise of generative AI (Artificial Intelligence) tools such as ChatGPT.

But beyond their legitimate uses, cybercriminals have also actively exploited these generative AI models for a range of illicit purposes, even before their mainstream rise.

Cybersecurity analysts at Trend Micro, Europol, and UNICRI jointly studied criminal AI exploitation, releasing the “Malicious Uses and Abuses of Artificial Intelligence” report a week after GPT-3’s debut in 2020.

The launch of AI models like GPT-3 and ChatGPT took the tech industry by storm, generating a wave of LLMs and tools that compete with or complement OpenAI's offerings, some focused on legitimate enhancements and others on attack strategies.

Hackers Abusing ChatGPT

Cybersecurity analysts recently noted a surge of discussion around ChatGPT among both developers and threat actors, reflecting its strong demand and broad range of capabilities.

Threat actors speed up their coding with ChatGPT by asking the model to generate specific functions, then integrating the AI-generated code into malware.

Bot that was fully programmed with ChatGPT (Source – Trend Micro)

ChatGPT excels at producing convincing text, a capability exploited in spam and phishing campaigns by cybercriminals who offer custom ChatGPT interfaces for crafting deceptive emails.

Researchers found that GoMailPro, a tool used by cybercriminals to send spam, reportedly integrated ChatGPT for drafting spam emails, as announced by its author on April 17, 2023.

GoMailPro allegedly integrates ChatGPT (Source – Trend Micro)

Due to built-in content restrictions, ChatGPT refuses to engage with illegal and controversial topics, which limits its usefulness to criminals. To work around this, threat actors are crafting and sharing prompts that evade those restrictions for illicit purposes.

On Hack Forums’ ‘Dark AI’ section, users discuss and share ChatGPT jailbreak prompts like ‘FFEN’ (Freedom From Everything Now) to bypass ethical limitations, under the following thread:

  • DAN 7.0 [FFEN]
ChatGPT jailbreak prompt (Source – Trend Micro)

Starting in June 2023, several threat actors on underground forums have been offering criminal-oriented language models with capabilities such as:

  • Tackling anonymity
  • Censorship evasion
  • Malicious code generation

While their legitimacy varies, it is challenging to distinguish genuine custom LLMs from ChatGPT-based wrappers potentially used for scams.

Malicious AI Models

Besides Evil-GPT, WormGPT, FraudGPT, XXXGPT, and Wolf GPT, security analysts also recently found the following models advertised with their respective prices on July 27, 2023:

  • FraudGPT: $90/month
  • DarkBARD: $100/month
  • DarkBERT: $110/month
  • DarkGPT: $200/lifetime subscription

Threat actors also employ AI for deepfakes, swapping faces in videos to deceive victims for extortion, fake news, or more convincing social engineering.

The use of AI for illicit purposes by threat actors is still in its early days; in short, it has not yet been as groundbreaking as in other sectors.

Even so, these models lower the barrier to entry for cybercrime, with scams mixed in among genuine tools, making it difficult even for threat actors to tell legitimate AI services apart from fraudulent ones.

Source: https://cybersecuritynews.com/hackers-abusing-chatgpt/
