A new “blackhat” breed of generative artificial intelligence (AI) tools such as WormGPT and FraudGPT is reshaping the security landscape. With their potential for malicious use, these sophisticated models could amplify the scale and effectiveness of cyberattacks.
WormGPT: The blackhat version of ChatGPT

WormGPT, based on the GPT-J language model developed in 2021, essentially functions as a blackhat counterpart to OpenAI's ChatGPT, but without ethical boundaries or limitations, according to SlashNext’s recent research.
The tool was allegedly trained on a broad range of data sources, with a particular focus on malware-related data, though the precise datasets used in training remain confidential. Its developers boasted that it offers features such as character support, chat memory retention and code formatting capabilities.
The SlashNext team gained access to WormGPT through a prominent online forum often associated with cybercrime and conducted tests focusing on business email compromise (BEC) attacks.
In one instance, WormGPT was instructed to generate an email aimed at coercing an unsuspecting account manager into paying a fraudulent invoice. “The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” the team wrote in a blog post.
FraudGPT goes on offense

FraudGPT is an AI tool similar to WormGPT, but it is marketed exclusively for offensive operations such as crafting spear-phishing emails, creating cracking tools and carding (a type of credit card fraud).
The Netenrich threat research team discovered this new AI bot, which is now being sold across various dark web marketplaces and on the Telegram platform. Besides crafting enticing and malicious emails, the team highlighted the tool’s capability to identify the most targeted services or sites, thereby enabling further exploitation of victims.
FraudGPT’s developers claim it has diverse features, including writing malicious code; creating undetectable malware, phishing pages and hacking tools; finding non-VBV bins; and identifying leaks and vulnerabilities. They also touted that it has more than 3,000 confirmed sales or reviews.
WormGPT and FraudGPT are still in their infancy

While FraudGPT and WormGPT are similar to ChatGPT in terms of capabilities and technology, the key difference is that these dark versions have no guardrails or limitations and are trained on stolen data, Forrester Senior Analyst Allie Mellen told SDxCentral.
“The only difference will be the goal of the particular groups using these platforms — some will use it for phishing/financial fraud and others will use it to attempt to gain access to networks via other means," echoed HYAS CTO David Mitchell.
FraudGPT and WormGPT have been getting attention since July. Mellen pointed out that attackers are still in the early stages of using these tools, and it is too early to tell whether there is real demand.
“It's getting some interest from the cybercriminal community, of course, but it's more experimental than it is mainstream,” she said. “We will see new variations pop up that make use of the data or that are trained on different types of data to make it even more useful for them.”
How and where hackers can harness malicious GPT tools

Mellen noted tools like FraudGPT and WormGPT will serve as a force multiplier and helper for attackers, but only when used correctly.
“Much like any other generative AI use case, it's important to note that these tools can be a force multiplier for users, but ultimately only if they know how to use them and only if they're willing to use them,” she said.
Here are some potential use cases Mellen listed:
- Enhanced phishing campaigns: One of the most basic uses is crafting phishing emails. Tools like WormGPT and FraudGPT bring a new advantage in translating lures effectively into various languages, making a message not only understandable but also enticing to the target and raising the chances they click malicious links. These tools also make it easier for attackers to automate phishing campaigns at scale, eliminating manual effort.
- Accelerated open source intelligence (OSINT) gathering: Typically, attackers invest significant time in OSINT, gathering information about their targets, such as personal or family details, company information and history, to aid their social engineering efforts. With tools like WormGPT and FraudGPT, this research process is substantially hastened: attackers simply input a series of questions and directives into the tool instead of doing the manual work.
- Automated malware generation: WormGPT and FraudGPT are proving useful in generating code, and this capability can be extended to malware creation. Especially with platforms like FraudGPT, which might have access to previously hacked information on the dark web, attackers can simplify and expedite the malware creation process. Even individuals with limited technical expertise can prompt these AI tools to generate malware, thereby lowering the entry barrier into cybercrime.
“Historically, these attacks could often be detected via security solutions like anti-phishing and protective DNS [domain name system] platforms,” HYAS’ Mitchell said in a statement. “With the evolution happening within these dark GPTs [Generative Pre-trained Transformers], organizations will need to be extra vigilant with their email & SMS messages because they provide an ability for a non-native English speaker to generate well-formed text as a lure.”
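To make Mitchell’s point concrete, here is a minimal sketch of the kind of check a protective DNS or anti-phishing layer performs: extracting the hostnames from URLs in a message body and refusing any that appear on a blocklist. The blocklist entries, function names and sample message below are illustrative placeholders, not any vendor’s API or real threat data.

```python
# Minimal sketch of a protective-DNS-style check: pull the hostnames out of
# the URLs in a message body and flag any that appear on a blocklist.
# Blocklist entries and the sample message are illustrative placeholders.
import re
from urllib.parse import urlparse

BLOCKLIST = {"evil-invoices.example", "paypa1-login.example"}  # placeholder entries

def extract_domains(body: str) -> set[str]:
    """Return the hostnames of all http(s) URLs embedded in the text."""
    hosts = (urlparse(u).hostname for u in re.findall(r"https?://[^\s\"'>]+", body))
    return {h for h in hosts if h}

def flag_suspicious(body: str) -> list[str]:
    """Return the domains a protective DNS layer would refuse to resolve."""
    return sorted(d for d in extract_domains(body) if d in BLOCKLIST)

if __name__ == "__main__":
    email_body = "Please settle the attached invoice at https://evil-invoices.example/pay"
    print(flag_suspicious(email_body))  # ['evil-invoices.example']
```

Real protective DNS products work from live threat intelligence and resolver telemetry rather than a static set, but the matching logic is the same in spirit.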
Threat landscape impacts

With all these emerging use cases, there will be more targeted attacks, Mellen noted. Phishing attempts will become more sophisticated, and the rapid generation of malicious code by AI will likely result in more duplicate malware.
“I'd expect that we'll see more consistency with some of the malware that's in use, which will cause some issues because it has the potential to even more so obfuscate nation-state activity as people copy and use whatever it is that they can find, whatever it is that ChatGPT gets trained on,” she said.
“So there's a lot of potential that we'll see an increase in attacker activity, especially on the cybercriminal side, as people who perhaps are not as sophisticated on the technology side, or who previously thought that being cybercriminals was not accessible to them, now have more opportunity there, unfortunately,” she added.
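Mellen’s duplication point cuts both ways: identical copies of AI-generated malware share identical cryptographic fingerprints, which makes reuse trivial to match but far weaker as an attribution signal once many unrelated groups copy the same code. The sketch below shows that fingerprinting in its simplest form; the catalogued digest and campaign label are hypothetical.

```python
# Minimal sketch of hash-based sample matching: identical malware samples
# produce identical SHA-256 digests, so copied code is trivial to match --
# but a match stops pointing at one specific group once many actors reuse
# the same generated code. The catalogued digest below is a placeholder.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digests catalogued from earlier investigations (placeholder value and label).
SEEN_SAMPLES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08": "campaign-A",
}

def match_sample(path: Path) -> str | None:
    """Return the prior campaign label if this exact binary was seen before."""
    return SEEN_SAMPLES.get(sha256_of(path))
```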
Don’t panic yet, but stay informed on genAI

Despite these daunting impacts, it's important not to panic, Mellen said, adding that much like any technological advancement, GPT tools can be a double-edged sword.
“It can be used for really positive things. It can also be used for really awful things. So it's another thing that CISOs need to consider and be concerned about,” she said. “But at the end of the day, it's just another tool, so don't go too crazy trying to change everything that you do. Just make sure that the tools that you're using are protecting you as best they can, and keep up to date with the current landscape.”
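One concrete way to act on “make sure the tools you’re using are protecting you” is to verify that your domain publishes SPF and DMARC records, the baseline email-authentication controls that blunt spoofed phishing and BEC. The sketch below is one way to do that check; it assumes the third-party dnspython package, and the domain queried is illustrative.

```python
# Small sketch of an email-authentication check: confirm a domain publishes
# SPF and DMARC records, two baseline controls against spoofing and BEC.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

def get_txt(name: str) -> list[str]:
    """Return all TXT strings published at `name`, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def check_email_auth(domain: str) -> dict[str, bool]:
    """Report whether the domain publishes SPF and DMARC policies."""
    return {
        "spf": any(t.startswith("v=spf1") for t in get_txt(domain)),
        "dmarc": any(t.startswith("v=DMARC1") for t in get_txt(f"_dmarc.{domain}")),
    }

if __name__ == "__main__":
    print(check_email_auth("example.com"))  # illustrative domain
```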
Mellen recommended organizations pay attention to and keep informed of the generative AI developments and what attackers are doing with it. “Understanding as much as we can now and keeping up to date on the actions that they're taking is pivotal.”