AI Agents Now Automate 'Janitorial Work' for Cybercriminals and North Korean Hackers
REDMOND, Wash. — Microsoft has observed cybercriminals and nation-state actors, including North Korea-linked groups, using AI agents to handle tedious operational tasks in cyberattacks, effectively turning the technology into a force multiplier for malicious operations. Sherrod DeGrippo, Microsoft's general manager of global threat intelligence, told The Register that attackers will adopt AI for whatever achieves their objectives "easiest and fastest." The development allows groups like North Korea's Coral Sleet to rapidly create and manage attack infrastructure at scale.
According to Microsoft's threat intelligence, Coral Sleet — one of the crews behind the ongoing fake IT worker scam — is leveraging development platforms and agentic AI capabilities to automate infrastructure deployment and management. This includes creating fake company websites, remotely provisioning servers, testing malicious payloads, and orchestrating full attack workflows without constant human oversight.
The use of AI extends beyond infrastructure to the fake IT worker operations themselves. North Korean operatives reportedly employ AI tools for face swapping to generate polished headshots for resumes and CVs, as well as to draft daily emails and maintain employment at Western firms. Microsoft has documented how groups like Jasper Sleet use AI across the entire attack lifecycle — from getting hired to maintaining access and misusing credentials at scale.
AI as Force Multiplier in Evolving Scams
DeGrippo emphasized in the interview that AI agents excel at the "janitorial-type work" required to plan and execute cyberattacks, freeing human operators to focus on higher-level strategy. This automation has breathed new life into longstanding North Korean IT worker scams, which involve DPRK nationals securing remote IT positions at Western companies to steal data or conduct other malicious activities.
Security experts note that organizations are increasingly catching on to traditional indicators of these scams, forcing attackers to upgrade their techniques with AI. Brian Hussey, senior vice president of Cyber Fusion at Cyderes, told Dark Reading that attackers will continue integrating AI into these operations as defensive measures improve.
The trend aligns with broader observations of AI being used in sophisticated espionage. In November 2025, Anthropic reported disrupting what it described as the first reported AI-orchestrated cyber espionage campaign, detected that mid-September, in which attackers used agentic AI not merely as an advisor but to actively execute attacks.
Implications for Cybersecurity Defenders
For defenders, the integration of AI agents into adversary toolkits raises the operational tempo and scale of attacks. Tasks that once required significant manual effort — such as infrastructure provisioning and payload testing — can now be automated, allowing smaller teams or resource-constrained nation-state groups to conduct more frequent and sophisticated campaigns.
Microsoft's findings highlight how AI lowers barriers for both cybercriminals and state-sponsored actors. The technology enables rapid iteration on attack infrastructure and helps maintain the facade of legitimate employment in remote work scams.
The development comes as organizations continue to grapple with the dual-use nature of AI tools. While many companies are exploring AI agents for legitimate productivity gains, the same underlying capabilities are being repurposed for malicious ends.
What's Next
Microsoft and other security firms expect continued evolution in how nation-state actors, particularly from North Korea, integrate AI into their operations. As detection methods improve, attackers are likely to further refine their use of AI for evasion and automation.
Industry observers anticipate that AI-driven attack automation will become more prevalent across the threat landscape, requiring defenders to develop new detection strategies focused on anomalous infrastructure provisioning patterns and AI-assisted social engineering.
Security teams are advised to strengthen identity verification processes, enhance monitoring of remote worker activity, and implement stricter controls on infrastructure deployment to counter these emerging automated threats.
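As one illustration of what monitoring for anomalous infrastructure provisioning might look like, the sketch below flags actors that issue an unusually dense burst of provisioning calls — the kind of machine-speed deployment described above. The event format, field names, and thresholds here are illustrative assumptions; a real deployment would consume a cloud provider's audit log rather than a hard-coded list.

```python
from datetime import datetime, timedelta

# Hypothetical provisioning audit events: (timestamp, actor, action).
# In practice these would be parsed from a cloud provider's audit log.
EVENTS = [
    ("2025-11-20T02:01:10", "svc-deploy", "create_vm"),
    ("2025-11-20T02:01:12", "svc-deploy", "create_vm"),
    ("2025-11-20T02:01:15", "svc-deploy", "register_domain"),
    ("2025-11-20T02:01:18", "svc-deploy", "create_vm"),
    ("2025-11-20T14:30:00", "alice", "create_vm"),
]

def burst_actors(events, window=timedelta(minutes=5), threshold=3):
    """Return the set of actors that issued at least `threshold`
    provisioning calls within any `window`-sized span — a crude proxy
    for automated, agent-driven infrastructure deployment."""
    per_actor = {}
    for ts, actor, _action in events:
        per_actor.setdefault(actor, []).append(datetime.fromisoformat(ts))
    flagged = set()
    for actor, times in per_actor.items():
        times.sort()
        for i in range(len(times)):
            # Count events falling inside the window starting at times[i].
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= threshold:
                flagged.add(actor)
                break
    return flagged

print(burst_actors(EVENTS))  # "svc-deploy" trips the threshold; "alice" does not
```

The window and threshold would need tuning per environment — legitimate automation (CI/CD pipelines, autoscalers) also provisions at machine speed, so a signal like this is a starting point for triage, not a verdict.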
Sources
- Manage attack infrastructure? AI agents can now help • The Register
- Microsoft Warns North Korean Agents Use AI to Land Western IT Jobs
- North Korean agents using AI to trick western firms into hiring them, Microsoft says | The Guardian
- Disrupting the first reported AI-orchestrated cyber espionage campaign | Anthropic
- North Korean APTs Use AI to Enhance IT Worker Scams
