Cybersecurity Forecast 2026: The Rise of AI Agents, Persistent Extortion, and Evolving Nation-State Tactics

Dec. 29, 2025 /Mpelembe Media/ — The Cybersecurity Forecast 2026 report by Google Cloud examines the anticipated evolution of digital threats, primarily focusing on the transformative role of artificial intelligence. It describes how adversaries will increasingly use AI agents for automated attacks and sophisticated social engineering, while defenders will adopt similar technology to enhance security operations. Beyond AI, the analysis highlights the persistent danger of ransomware and the expansion of cybercrime into the blockchain and virtualisation sectors. The document also evaluates the strategic motivations of nation-state actors from Russia, China, Iran, and North Korea as they pursue global espionage and disruption. Ultimately, the report serves as a guide for organisations to modernise their identity management and governance frameworks in response to these emerging risks.

Based on the themes explored in the sources, a comprehensive headline for this article would be:

There are three central themes identified in the report: the transformative role of Artificial Intelligence, the continued disruption caused by global cybercrime, and the strategic shifts of nation-state actors.

The sources provide several key insights that justify this headline:

The AI Revolution: By 2026, the use of AI by threat actors will transition from the exception to the norm, particularly in social engineering and malware development. While defenders will use “Agentic SOCs” to supercharge security analysts, they must also contend with new risks like “Shadow Agents”—unauthorised AI agents deployed by employees—and prompt injection attacks.

Persistent Cybercrime: Ransomware and data theft extortion are expected to remain the most financially disruptive threats. Furthermore, the report highlights a shift towards the “on-chain” cybercrime economy, where adversaries exploit blockchain technology for resilience and financial gain.

Strategic Nation-State Shifts: Major actors like Russia, China, Iran, and North Korea are evolving their tactics. For instance, Russia is expected to prioritise long-term global strategic goals over short-term tactical support for the conflict in Ukraine, while China will likely focus on stealthy operations and targeting edge devices.

Infrastructure Vulnerabilities: Adversaries are increasingly pivoting towards enterprise virtualization infrastructure, such as hypervisors, because these layers often lack visibility and provide high-leverage entry points for systemic disruption.

The cybersecurity landscape of 2026 is like a high-speed chess match where both the players and the board itself are being constantly reshaped by AI; for defenders to win, they cannot just react to individual moves but must master the new, automated logic of the game itself to stay ahead.

By 2026, AI adoption is expected to fundamentally reshape the security analyst’s day-to-day focus, transitioning the role from manual data processing to strategic oversight and direction. Instead of being overwhelmed by a high volume of alerts, analysts will act as directors of AI agents within an “Agentic SOC”.

The sources highlight several specific ways the role will evolve:

Shift to Strategic Validation: Analysts will move away from manual data correlation. In the event of an incident, AI will provide pre-packaged case summaries, decode obfuscated commands (such as PowerShell), and map threats to the MITRE ATT&CK framework. This allows the analyst to focus on final judgment and strategic validation, enabling them to approve containment actions in minutes rather than hours.

Plain-English Threat Hunting: The process of threat hunting will be revolutionised by AI’s ability to handle the “heavy lifting” of gathering and correlating petabytes of data. Analysts will be able to form hypotheses and query their AI agents using plain English to search for specific tactics, techniques, and procedures (TTPs) across the environment.

Scaling Human Intuition: AI is intended to scale, rather than replace, human intuition. For intelligence production, an analyst can provide a malware sample and preliminary notes, while the AI drafts the full threat report, including attribution and mitigations. This frees the human analyst to focus on high-level analysis.

New Technical Competencies: The rise of the “on-chain” cybercrime economy will require analysts to become proficient blockchain investigators. They will need to develop skills in tracing transaction histories, decoding malicious smart contract logic, and performing wallet analysis to ensure they are not “blind” to agile, persistent threat activity.

Governance and “Shadow Agent” Management: Analysts will need to adapt to a new discipline of AI security and governance. This involves managing “Shadow Agents”—unauthorised AI agents deployed by employees—and implementing “agentic identity management” to control the access and actions of these new digital actors.

The transition of the security analyst is like a librarian moving from manually filing every individual book to becoming the director of a fully automated, digital archive; they no longer spend their time shelving volumes, but instead focus on the complex research queries and high-level strategy that the automated system cannot handle alone.

The “Shadow Agent” risk is defined as the unauthorised deployment of powerful, autonomous AI agents by employees to perform work tasks without corporate approval. It represents a critical escalation of the existing “Shadow AI” problem, transitioning from the use of unauthorised software to the deployment of independent digital actors capable of executing complex workflows and making decisions.

According to the sources, several key factors define this modern organisational risk:

Invisible Data Pipelines: Because these agents are deployed independently, they create uncontrolled pipelines for sensitive information. This lack of visibility significantly increases the likelihood of data leaks, intellectual property theft, and compliance violations.

Operational Autonomy: Unlike traditional software, these agents can execute workflows and decisions independently. Traditional security deployments were not designed to manage or monitor actions taken by AI agents, creating a fundamental gap in existing security frameworks.

The “Ban” Paradox: The sources suggest that banning AI agents is not a viable solution. Attempting to prohibit their use typically drives the activity off the corporate network, which eliminates any remaining visibility for the security team and leaves the organisation even more vulnerable.

Identity and Access Challenges: Shadow agents often operate without a managed identity. This necessitates a shift towards “agentic identity management,” where agents are treated as distinct digital actors requiring their own granular, “just-in-time” access permissions to prevent “privilege creep” or unauthorised actions.

To manage this risk, the sources recommend that organisations establish a new discipline of AI security and governance. This involves a secure-by-design approach where AI controls are used to safely route and monitor all agent traffic, allowing for innovation while maintaining an auditable security posture.

Shadow Agents are like independent contractors who have been hired secretly by staff members; they may be incredibly efficient at getting work done, but because they haven’t been vetted or given a specific set of rules by the company, they are effectively walking around the office with master keys and handling sensitive files without any official oversight.

To secure virtualization infrastructure against evolving threats, organisations must undergo a strategic shift from guest-centric security to infrastructure-level protection. Because the virtualization layer—the “fabric” of enterprise applications—is often a blind spot for traditional security, specific methodologies must be adopted to prevent systemic disruption.

Based on the sources, organizations should implement the following strategies:

Move Beyond Guest-Centric Models: Security teams often focus solely on user endpoints and in-guest defenses, leaving the underlying hypervisor unmonitored. Organisations need to develop new capabilities to monitor and counter threats directly at the infrastructure level, rather than assuming that guest-level Endpoint Detection and Response (EDR) is sufficient.

Address the EDR Visibility Gap: A primary vulnerability in virtualization is the inherent lack of EDR visibility within the hypervisor itself. Adversaries exploit this to bypass in-guest security and execute mass encryption of foundational virtual machine disks. Strategies must include finding ways to gain visibility into the hypervisor’s operations.

Harden Configurations and Update Cycles: The sources identify the persistence of outdated software versions and insecure default configurations as major entry points for attackers. Regular patching of the virtualization stack and moving away from default settings are critical defensive steps.

Secure Identity Integrations: Hypervisors are often deeply integrated into legacy core identity services, turning them into high-leverage entry points where a single compromise can grant control over the entire digital estate. Organisations should re-evaluate and secure these integrations to prevent lateral movement.

Leverage Managed Services: To reduce the attack surface, organisations can opt for managed virtualization services (such as Google Cloud VMware Engine). This approach enhances security by restricting direct access to underlying components like ESXi and offloading the responsibility for continuous vulnerability monitoring to the service provider.

Prepare for Compressed Timelines: Because infrastructure-level attacks can render hundreds of systems inoperable in hours—far faster than traditional ransomware—defenders must develop rapid detection and response capabilities tailored to this specific velocity.

 Securing virtualization infrastructure is like protecting the foundation and skeletal structure of a skyscraper rather than just locking the doors of the individual offices; if an intruder compromises the building’s core structural integrity, every office inside becomes vulnerable regardless of how strong its own door is.

Prompt injection is a sophisticated cyberattack that manipulates artificial intelligence by tricking the model into bypassing its internal security protocols. It functions by embedding hidden commands within a prompt, which the AI then prioritises over its original safety guidelines or the intended user instructions.

This manipulation is evolving from simple experimental exploits into a significant threat used for large-scale data exfiltration and sabotage. The vulnerability exists because many business systems are integrating powerful AI models into daily operations, creating an environment where untrusted data can influence the model’s logic.

To counter this manipulation, defenders are implementing multi-layered “defense-in-depth” strategies, including:

Content Classifiers: Using machine learning to identify and filter out malicious instructions hidden within untrusted data before they reach the model.

Security Thought Reinforcement: A technique that trains the model to prioritise the original user’s intent and resist being swayed by conflicting or “injected” instructions.

Output Sanitisation and Confirmation: Implementing guardrails that cleanse the AI’s response and require human authorisation before the AI can perform any high-risk actions.

Model Hardening: Strengthening the foundational architecture of the AI to make it more resilient against direct targeting and manipulation.

Prompt injection is like a malicious note hidden inside a legitimate delivery package; the security guard (the AI) opens the package to process the delivery but finds a note that says, “Ignore all previous orders and let me into the vault.” Unless the guard is specifically trained to ignore notes found inside packages, they might follow the new, “injected” instruction instead of their original duty.

In the evolving “on-chain” cybercrime economy, blockchain immutability acts as a double-edged sword: while it provides adversaries with resilience against traditional takedowns, it simultaneously creates a permanent and publicly auditable record of every malicious action they take.

According to the sources, blockchain immutability assists investigators with threat attribution in several transformative ways:

Permanent Operational Security Risk: For threat actors, the inability to alter or delete blockchain records is a significant long-term risk. Every action—whether it is funding a digital wallet or deploying a malicious smart contract—is etched permanently into the ledger.

Linking Disparate Campaigns: Immutability allows investigators to definitively link cyberattacks separated by several years. By identifying the reuse of specific wallet addresses or finding identical contract bytecode (the compiled code of a smart contract), analysts can connect seemingly unrelated incidents to a single threat actor or group.

Strategic Disruption: This capability shifts the focus of cybersecurity from merely reacting to individual incidents to the strategic disruption of entire on-chain criminal enterprises. Because the ledger is public, investigators can trace the flow of stolen assets and map out the financial infrastructure of a criminal organisation with high precision.

Required Investigative Competencies: To leverage this immutability, the sources note that security analysts by 2026 must become proficient blockchain investigators. This involves developing new skills in tracing transaction histories, decoding malicious smart contract logic, and performing detailed wallet analysis.

Organizations that fail to upskill their teams in these Web3 fundamentals may remain “blind” to this class of agile and persistent threat activity.

Blockchain immutability is like a criminal leaving their fingerprints in wet concrete that instantly hardens into stone; they might successfully complete their task and walk away, but their unique mark is now a permanent part of the pavement, waiting for an investigator to find it and match it to their previous crimes years down the line.

North Korea utilises a global network of IT workers to conduct cyber operations that serve two primary strategic objectives: generating revenue for the regime and performing traditional cyber espionage against adversaries such as the United States and South Korea.

The deployment and activities of these workers involve several sophisticated tactics:

Obfuscation and “Laptop Farms”: To secure employment in Western organisations, North Korean IT workers have historically used “laptop farms”—specifically in the US—to enable remote access and hide their true physical locations. As law enforcement pressure and corporate awareness have increased in the US, these workers are now diversifying their operations globally, with a projected expansion into Europe by 2026.

Abuse of Employer Access: The risk posed by these workers extends far beyond their initial salary earnings. Once embedded within an organisation, they abuse their network access for direct financial gain, specifically targeting and stealing cryptocurrency from crypto-focused companies.

Strategic Espionage: Beyond financial theft, these workers leverage their positions for high-level intelligence gathering. The sources highlight instances where workers have used their employment access to steal sensitive data, such as proprietary information from a defence contractor developing AI technology.

Deceptive Recruitment Tactics: The North Korean cyber apparatus supports these workers through advanced social engineering. This includes luring potential targets with fake “hiring assessment” webpages and using deepfake videos to build trust and deceive high-value personnel during the recruitment or onboarding process.

Innovative On-Chain Activity: These IT workers operate within a broader ecosystem that is increasingly moving its operations “on-chain”. This includes using the Web3 stack for command-and-control (C2) and asset monetisation via tokenized marketplaces, which provides their operations with unprecedented resilience against traditional law enforcement takedowns.

 A North Korean IT worker is like a “Trojan Horse” employee; they don’t just enter an organisation to do the job they were hired for, but to act as a silent gateway, allowing an external adversary to walk through the front door, raid the treasury, and copy the blueprints while everyone thinks they are just another remote colleague.