
Criminals Have Created Their Own ChatGPT Clones


It didn’t take long. Just months after OpenAI’s ChatGPT chatbot upended the startup economy, cybercriminals and hackers are claiming to have created their own versions of the text-generating technology. The systems could, theoretically at least, supercharge criminals’ ability to write malware or phishing emails that trick people into handing over their login information.

Since the start of July, criminals posting on dark-web forums and marketplaces have been touting two large language models (LLMs) they say they’ve produced. The systems, which are said to mimic the functionality of ChatGPT and Google’s Bard, generate text in response to the questions or prompts users enter. But unlike the LLMs made by legitimate companies, these chatbots are marketed for illegal activities.

There are outstanding questions about the authenticity of the chatbots. Cybercriminals are not exactly trustworthy characters, and there remains the possibility that they’re trying to make a quick buck by scamming each other. Even so, the claims come at a time when scammers are already exploiting the hype around generative AI for their own ends.

In recent weeks, two chatbots have been advertised on dark-web forums—WormGPT and FraudGPT—according to security researchers monitoring the activity. The LLMs developed by large tech companies, such as Google, Microsoft, and OpenAI, have a number of guardrails and safety measures in place to stop them from being misused. If you ask them to generate malware or write hate speech, they’ll generally refuse.
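
To make the contrast concrete, a guardrail can be pictured as a safety check that runs before the model ever answers. The sketch below is a deliberately simplified Python illustration, not any vendor’s actual implementation; the topic list, refusal message, and generate placeholder are all hypothetical.

    # Toy illustration of a pre-generation guardrail. Production systems rely
    # on trained safety classifiers and policy models, not keyword matching;
    # every name here is hypothetical.
    BLOCKED_TOPICS = {"malware", "phishing", "hate speech"}

    def flag_topics(prompt: str) -> set[str]:
        """Stand-in for a safety classifier: returns any blocked topics found."""
        lowered = prompt.lower()
        return {topic for topic in BLOCKED_TOPICS if topic in lowered}

    def generate(prompt: str) -> str:
        # Placeholder for the actual call to the underlying LLM.
        return "(model response)"

    def answer(prompt: str) -> str:
        """Refuse flagged prompts; otherwise hand off to the model."""
        if flag_topics(prompt):
            return "Sorry, I can't help with that request."
        return generate(prompt)

    print(answer("Write malware that steals passwords."))  # prints the refusal

An “uncensored” model simply ships without this layer, which is the entire sales pitch behind tools like WormGPT and FraudGPT.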

The sellers of these shady LLMs claim they strip away any kind of safety protection or ethical barrier. WormGPT was first spotted by independent cybersecurity researcher Daniel Kelly, who worked with security firm SlashNext to detail the findings. WormGPT’s developers claim the tool offers an unlimited character count and code formatting. “The AI models are notably useful for phishing, particularly as they lower the entry barriers for many novice cybercriminals,” Kelly says in an email. “Many people argue that most cybercriminals can compose an email in English, but this isn’t necessarily true for many scammers.”

In a test of the system, Kelly writes, it was asked to produce an email that could be used as part of a business email compromise scam, with a purported CEO writing to an account manager to say an urgent payment was needed. “The results were unsettling,” Kelly wrote in the research. The system produced “an email that was not only remarkably persuasive but also strategically cunning.”

In forum posts, the WormGPT developer claimed the system was built on GPT-J, an open-source language model developed by the AI research group EleutherAI in 2021. They refused to disclose the data sets used to train the system, according to Kelly’s research.
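
For context, GPT-J’s weights are openly downloadable, which helps explain why it makes an attractive base for unofficial derivatives. The minimal sketch below loads the public checkpoint with the Hugging Face transformers library; it demonstrates only that the base model is freely available and implies nothing about WormGPT’s undisclosed training data.

    # Minimal sketch: loading the openly released GPT-J checkpoint via the
    # Hugging Face transformers library (the 6-billion-parameter model needs
    # roughly 24 GB of memory). Nothing here reflects any fine-tuning.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

    inputs = tokenizer("Large language models are", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))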


