Cybercriminals are using a new AI-based tool to create convincing phishing emails and launch business email compromise (BEC) attacks, according to a report by security firm SlashNext. The tool, called WormGPT, presents a chat interface much like ChatGPT, the popular chatbot that generates fluent, engaging text in multiple languages.
However, unlike ChatGPT, which has some ethical boundaries and limitations, WormGPT has none. It can impersonate anyone, from CEOs to customers, and craft grammatically flawless messages that can trick even the most vigilant recipients.
Unlike ChatGPT, though, WormGPT is not built on OpenAI's technology. It uses GPT-J, an open-source large language model released by EleutherAI in 2021 with roughly 6 billion parameters. The tool supports character role-play, retains conversation history, and can format code, and its performance is comparable to an older version of GPT-3.
WormGPT relies on natural language generation (NLG), producing text from a given prompt or context. Given a subject line, a sender name, and a recipient name, for example, it can generate a convincing email body that matches the sender's tone and style. It can also adapt to different languages and dialects, making it more effective against global audiences.
According to SlashNext, WormGPT is a subscription service that costs 100 euros per month or 550 euros per year. Alternatively, adversaries can opt for a "private setup" that costs 5,000 euros. The developer offers a 5 percent discount with the coupon code "SAGE." Interested parties can reach out to the developer via Telegram.
“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” writes Daniel Kelley, a reformed black hat computer hacker collaborating with the SlashNext team.
SlashNext warns that WormGPT poses a serious threat to both individuals and organizations, as it can bypass traditional phishing defenses and exploit human vulnerabilities. The firm advises users to be extra cautious when opening emails from unknown or unexpected senders, and to verify the identity and authenticity of the sender before clicking on any links or attachments. Additionally, users should use strong passwords and multi-factor authentication for their online accounts, and report any suspicious emails to their IT department or security provider.
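Part of that sender verification can be automated. As a minimal sketch, assuming a raw RFC 5322 message whose receiving mail server stamps an `Authentication-Results` header (RFC 8601), the snippet below uses Python's standard `email` library to flag messages that fail SPF, DKIM, or DMARC checks; the function name and sample message are illustrative, not from the report:

```python
from email import message_from_string
from email.policy import default

def auth_failures(raw_message: str) -> list[str]:
    """Return SPF/DKIM/DMARC results from the Authentication-Results
    header (RFC 8601) that are anything other than 'pass'."""
    msg = message_from_string(raw_message, policy=default)
    failures = []
    for header in msg.get_all("Authentication-Results", []):
        # Header value looks like:
        #   authserv-id; spf=fail smtp.mailfrom=...; dkim=pass header.d=...
        for part in str(header).split(";")[1:]:  # skip the authserv-id
            method, _, result = part.strip().partition("=")
            method = method.split()[0].lower()
            result = result.split()[0].lower() if result else ""
            if method in ("spf", "dkim", "dmarc") and result != "pass":
                failures.append(f"{method}={result}")
    return failures

# Hypothetical BEC-style message that fails SPF but passes DKIM.
raw = (
    "Authentication-Results: mx.example.com; spf=fail "
    "smtp.mailfrom=ceo@example.com; dkim=pass header.d=example.com\n"
    "From: CEO <ceo@example.com>\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please process the attached invoice today.\n"
)
print(auth_failures(raw))  # → ['spf=fail']
```

A non-empty result is a signal to quarantine the message or escalate it to the security team rather than trust the displayed sender name, since flawless grammar no longer distinguishes legitimate mail from AI-generated phishing.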
“We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes. Not only are they creating these custom modules, but they are also advertising them to fellow bad actors,” Kelley said.
Keep in touch with our blog to read the latest news and innovations in the cybersecurity world.