It didn't take long. Just months after OpenAI's ChatGPT chatbot upended the startup economy, cybercriminals and hackers are claiming to have created their own versions of the text-generating technology. The systems could, theoretically at least, supercharge criminals' ability to write malware or phishing emails that trick people into handing over their login information.
Since the start of July, criminals posting on dark-web forums and marketplaces have been touting two large language models (LLMs) they say they've produced. The systems, which are said to mimic the functionalities of ChatGPT and Google's Bard, generate text to answer the questions or prompts users enter. But unlike the LLMs made by legitimate companies, these chatbots are marketed for illegal activities.
There are outstanding questions about the authenticity of the chatbots. Cybercriminals are not exactly trustworthy characters, and it remains possible that they're simply trying to make a quick buck by scamming one another. Even so, the claims come at a time when scammers are already exploiting the hype around generative AI for their own ends.