Cybercriminals in Disagreement over GPTs
A recent Sophos X-Ops investigation examined how threat actors are using large language models (LLMs) such as GPT in the criminal underground, focusing on discussions and opinions across four prominent criminal forums: Exploit, XSS, Breach Forums, and Hackforums.
The investigation found several GPT derivatives on offer on these forums, including EvilGPT, DarkGPT, PentesterGPT, and XXXGPT.
Threat actors were broadly skeptical of tools like ChatGPT, dismissing them as overhyped and ill-suited to generating malware. They also raised operational security concerns, including the risk that LLM-generated code would be flagged by AV/EDR systems.
The investigation found only a few examples of threat actors using LLMs to generate malware and attack tools, and all of those were proofs of concept. More often, threat actors turned LLMs to other ends, such as mundane coding tasks.
Social engineering attacks and tangential security-related tasks were the most commonly cited real-world applications for LLMs, though even these remained largely aspirational.
Notably, the investigation also found a significant number of compromised ChatGPT accounts for sale, suggesting that threat actors are quick to compromise and resell accounts whenever a new platform gains traction.
WormGPT and FraudGPT, the best-known malicious LLM services, are no longer the only players in town, either. Newer LLM-based chatbot services such as EvilGPT and XXXGPT drew mixed reactions, with some users expressing keen interest while others accused the developers of being scammers.
Overall, the investigation revealed that threat actors hold a range of opinions on, and have found a range of uses for, LLMs. While some are eager to experiment with these models, many remain cautious and regard the technology as still in its infancy. Their reservations span skepticism about LLM capabilities, operational security worries, and even ethical considerations.