It was only a matter of time before AI chatbots were emulated for malicious purposes – and one such tool is now on the market, known as WormGPT. ZDNET screenshot via Twitter
When ChatGPT was made available to the public on November 30, 2022, the AI chatbot took the world by storm.
The software was developed by OpenAI, an AI research company. ChatGPT is a natural language processing tool capable of answering questions and providing information, drawing on datasets that include books and online web pages. It has since become an important tool for on-the-fly information gathering, analysis, and writing tasks for millions of users worldwide.
Also: The 5 Best VPN Services (And Tips for Choosing the Right Service for You)
While some experts believe the technology could prove as disruptive as the internet itself, others note that ChatGPT exhibits ‘confident inaccuracy’. Students have already been caught plagiarizing coursework with the tool, and because its training data cannot be fully verified, tools like ChatGPT can become unwitting vehicles for spreading misinformation and propaganda.
In fact, the U.S. Federal Trade Commission (FTC) is investigating OpenAI over its handling of personal information and the data used to build its language model.
However, beyond data security concerns, whenever a new technological innovation is introduced, avenues for abuse also emerge. It was only a matter of time before AI chatbots were emulated for malicious purposes – and one such tool is now on the market, known as WormGPT.
On July 13, researchers at cybersecurity firm SlashNext published a blog post revealing the discovery of WormGPT, a tool being promoted for sale on a hacker forum.
According to a forum user, the WormGPT project aims to be a blackhat “alternative” to ChatGPT, “that lets you do all kinds of illegal work and easily sell it online in the future.”
Also: Scammers are using AI to impersonate your loved ones. Here's what to watch out for
SlashNext obtained access to the tool, which is described as an AI module based on the GPT-J language model. WormGPT has reportedly been trained on data sources including malware-related information – but the specific datasets are known only to WormGPT’s author.
It may be possible for WormGPT to generate malicious code, for example, or convincing phishing emails.
WormGPT has been described as “similar to ChatGPT but with no ethical limits or boundaries.”
ChatGPT has a set of rules to prevent users from unethically misusing the chatbot. This includes refusing to complete tasks related to criminality and malware. However, users are constantly looking for ways to get around these limits.
Also: According to one study, GPT-4 is getting weaker over time
The researchers were able to use WormGPT “for the purpose of generating an email to pressure an unwitting account manager into paying a fraudulent invoice.” The team was surprised by how well the language model managed the task, describing the result as “remarkably persuasive (and) strategically clever”.
While the researchers didn’t say whether they also attempted to have it write malware, it’s plausible that the AI bot could, given that ChatGPT’s ethical restrictions don’t apply to it.
Also: Gmail will now help you compose emails: How to access Google’s new AI tool
According to posts seen by ZDNET on a Telegram channel apparently launched to promote the tool, the developer is offering subscription access ranging from $60 to $700. One channel member, “darkstux”, claims that WormGPT already has over 1,500 users.
No: ChatGPT is developed by OpenAI, a legitimate and respected organization. WormGPT is not OpenAI’s creation, and it is an example of how cybercriminals can take inspiration from advanced AI chatbots to develop their own malicious tools.
Even in the hands of novices and your typical scammers, natural language models can turn basic, easily avoidable phishing and BEC scams into sophisticated operations that are more likely to succeed. There’s no doubt that cyber criminals will move where there’s money to be made – and WormGPT is just the beginning of a new range of tools for cybercriminals determined to trade in underground markets.
Also: 6 skills you need to become an AI prompt engineer
It’s also unlikely that WormGPT is the only one.
- Europol: In a 2023 report, “The Impact of Large Language Models on Law Enforcement”, Europol said: “Monitoring developments will be important, as dark LLMs (large language models) trained to facilitate harmful output may become a major criminal business model of the future. This poses a new challenge for law enforcement, making it easier than ever for malicious actors to carry out criminal activities without the necessary prior knowledge.”
- Federal Trade Commission (FTC): The FTC is probing ChatGPT maker OpenAI over its data use policies and the chatbot's inaccuracy.
- UK National Crime Agency (NCA): The NCA has warned that AI could lead to an explosion in the risk of abuse against young people.
- UK Information Commissioner's Office (ICO): The ICO has reminded organizations that their AI tools are still bound by existing data protection laws.
It doesn't even take covert tactics: with the right prompts, many natural language models can be coaxed into particular actions and behaviors.
Also: Detecting deepfakes in real time: How Intel Labs uses AI to fight misinformation
For example, ChatGPT can draft professional emails, cover letters, resumes, purchase orders, and more. This alone can remove some of the most common telltale signs of a phishing email: spelling mistakes, grammatical problems, and second-language quirks. That by itself is a headache for businesses trying to train their employees to recognize suspicious messages.
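To illustrate why those telltale signs matter, here is a minimal, hypothetical sketch of a rule-based indicator scorer of the kind that awareness training and simple filters rely on. The word lists and sample emails are invented for illustration, not drawn from any real detection product; the point is that a fluent, LLM-written message trips far fewer of these rules than a clumsily written one.

```python
import re

# Illustrative word lists only -- a real filter would use far richer signals.
URGENCY_KEYWORDS = {"urgent", "immediately", "wire", "invoice", "overdue"}
COMMON_MISSPELLINGS = {"recieve", "accout", "verfy", "paymnet"}

def indicator_score(email_text: str) -> int:
    """Count classic phishing tells: urgency language and misspellings."""
    words = set(re.findall(r"[a-z']+", email_text.lower()))
    score = len(words & URGENCY_KEYWORDS)
    score += len(words & COMMON_MISSPELLINGS)
    return score

# A clumsy scam email trips several rules; a fluent, AI-polished one
# carries the same intent while leaving almost nothing to flag.
clumsy = "Urgent: please verfy your accout and send paymnet immediately"
fluent = "Hi Sam, could you settle the attached invoice today? Thanks!"
```

Running `indicator_score` on both samples shows the gap: the clumsy email scores on both urgency words and misspellings, while the fluent one barely registers, which is exactly the advantage tools like WormGPT hand to attackers.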
SlashNext researchers say, “Cyber criminals can use such technology to automate the creation of highly credible fake emails personalized for the recipient, increasing the likelihood of attack success.”
Also: 7 advanced ChatGPT prompt-writing tips you need to know
For step-by-step instructions on using ChatGPT for legitimate purposes, check out ZDNET’s guide on how to start using ChatGPT.
ChatGPT is free to use. The tool can be used to answer common questions, write content and code, or generate leads for everything from creative stories to marketing projects.
There is also a subscription option, ChatGPT Plus, which users can sign up for. Membership costs $20 per month and gives users access to ChatGPT even during peak times, faster response times, and priority access to fixes and improvements.
Also: How to Access, Install, and Use AI ChatGPT-4 Plugins (And Why You Should)