
AI’s Dark Side: Global Threat Actors Use ChatGPT and LLMs to Scale Cyberattacks and Automate Scams



The rapid adoption of Large Language Models (LLMs) such as ChatGPT by malicious actors is intensifying cyber threats worldwide. Recent reports indicate that state-sponsored espionage groups and organized criminal gangs are integrating AI into their operations to dramatically amplify the scale and effectiveness of their existing activities, shaping a global threat environment that is more automated and sophisticated than ever.

This shift is documented in recent threat intelligence reports from major technology companies, including OpenAI and Microsoft, which describe the growing use of generative AI to automate key stages of cyber campaigns. Notably, the observed activity shows that actors are not inventing novel attacks; they are bolting AI tools onto their existing malicious playbooks. Reported abuses of the technology include:

  • Malware Refinement: Russian-speaking cybercrime crews have been actively attempting to use the models to refine components of sophisticated malware, including credential stealers and remote-access trojans (RATs).

  • Espionage Automation: Groups affiliated with the Chinese government are using AI to generate more convincing, context-aware phishing messages and to debug complex code, streamlining their intelligence-collection work.

  • C2 Development: Korean-speaking operators have been found using LLMs to build advanced command-and-control (C2) infrastructure for their networks.

Compounding the threat, organized crime syndicates operating in regions such as Southeast Asia are using AI to automate large-scale financial fraud. These organizations use the models to turn poorly written scam messages into fluent, persuasive English and to produce convincing synthetic media and elaborate fake investor personas for fraudulent investment schemes. The ease of generating custom, high-quality content also lowers the technical barrier to entry, and Microsoft reports that more than half of attacks with a known motive are now driven by extortion or ransomware.


While AI is amplifying the speed and reach of attackers, the technology is also becoming an important defensive instrument. In a recent report, OpenAI noted that its models are used by ordinary users to identify scams as much as three times more often than they are used by malicious actors to create them.


That defensive potential extends to consumer platforms as well. Roblox, facing rising controversy over child safety, has responded by implementing stronger safety measures. In early September, the company introduced a new age estimation feature, a machine-learning tool that groups users by age. Combined with new direct messaging controls, the feature is part of Roblox's broader effort to make the platform a safer environment for its younger audience.

Despite these efforts, critics argue that Roblox's measures are insufficient and that the company has long prioritized growth and profitability over the safety of its most vulnerable users. Ongoing lawsuits and media scrutiny feed a broader debate over whether technology companies can adequately protect children online. The outcomes of these cases are expected to set landmark precedents for how online platforms will be held accountable for child safety in the future.





