Frontline security predictions 2026: The battle for reality and control in a world of agentic AI
A look at how agentic AI will transform cyber threats and defenses in 2026
Takeaways
- Ways agentic AI is shaping the evolution of the threat landscape
- How organizations can defend against agentic AI threats
- Strategies for protecting agentic AI implementations
- How agentic AI will help enhance security
The power and potential of agentic AI — adaptive, automated and independent — dominated security conversations during 2025. Barracuda asked four colleagues leading cyberthreat and security areas around the world what they expect from agentic AI in 2026 and what it means for cybersecurity.
Between them, they anticipate a future where:
- An agentic AI operator will run cyberattacks end-to-end, gathering what it needs, crafting convincing lures, trying a path, watching how the target’s protection or defense reacts, then quietly shifting tactics and timing until it gets what it wants. Defenders should expect new attack types and tactics that don’t look like anything they’ve seen and may be hard to explain after the fact. (Yaz Bekkar)
- Agentic AI will analyze vast amounts of data to identify vulnerabilities in real time, allowing attackers to automate how they exploit weaknesses more effectively. (Jesus Cordero-Guzman)
- AI will hold conversations with humans, both voice and messaging, at a level that makes it nearly impossible for people to spot they’re not conversing with an actual person. (Eric Russo)
- Multiple AI agents will work in tandem on actions to achieve a particular objective, with minimal or no human supervision. This opens the possibility of hijacking or poisoning agent-to-agent interactions, using attacker-controlled information to manipulate the coordinated actions. The lack of humans in the loop could delay detection and mitigation. (Rohit Aradhya)
- Cybersecurity leadership will be about leading AI agents as well as people. The new generation of managers will need to learn how to include AI agents in their teams and manage them to enhance productivity and decision-making. (Jesus Cordero-Guzman)
What does agentic AI mean for the evolution of the threat landscape in 2026 and beyond?
Yaz Bekkar, principal consulting architect XDR, EMEA. By next year, attacks won’t just use AI; the AI will behave like an independent operator, making real-time choices to reach the attack goal. We’re already seeing AI automate chunks of the kill chain such as reconnaissance, phishing and basic defense evasion. I believe that the shift in 2026 will be toward systems that plan steps, learn from defenses in real time, and reroute without human steering.
The AI operator will run the show end-to-end, gathering what it needs, crafting convincing lures, trying a path, watching how your protection or defenses react, then quietly shifting tactics and timing until it gets what it wants. These advanced hacking tools will feel like a coordinated brain that strings steps together, learns from each obstacle and blends into normal activity.
Defenders should expect new attack types and tactics that don’t look like anything they’ve seen and may be hard to explain after the fact. The attack surface keeps expanding, creating both known and unknown gaps, and zero-day exploitation will rise.
Eric Russo, director, SOC defensive security. AI has advanced so rapidly that it can hold conversations with humans, both voice and messaging, at a level that makes it nearly impossible for people to tell they’re not conversing with an actual person. Android users, for example, can now leverage Google's AI capabilities to answer phone calls from unknown numbers, have it ask questions, hold a conversation, and determine if the call is legitimate. The technology is extremely impressive — but now imagine a threat actor using that same technology.
Agentic AI could be used to transform traditional attack approaches. For example, it could run a social engineering attack against a member of the finance team to obtain corporate banking account numbers. There could also be more intricate objectives, such as using deepfake impersonation to deceive a help desk admin into resetting MFA, giving a threat actor access to IT systems and enabling a more elaborate attack to unfold.
Jesus Cordero-Guzman, director, solution architects for application, network security and XDR, EMEA. Autonomous AI is already a reality, and as a threat it will evolve quickly. It can analyze vast amounts of data to identify vulnerabilities in real time, allowing attackers to automate how they exploit weaknesses more effectively. Xanthorox, a purpose-built, self-hosted artificial intelligence platform designed from the ground up to facilitate cybercrime, appeared in early 2025. There are also HexStrike and the even more advanced Venice.AI. Agentic AI could be weaponized for automated phishing attacks, real-time monitoring of defenses, and even the ability to crack CAPTCHAs to gain unauthorized access.
Rohit Aradhya, VP and managing director, app security engineering. The evolution of agentic AI will lead to a rise in adaptive polymorphic malware — malware that can analyze the victim’s environment and security tools and autonomously rewrite or alter its own code and behavior to bypass signature-based and behavioral defenses in real time.
We’ll see multiple agents working in tandem on actions to achieve a particular objective, with minimal or no human supervision. This opens the possibility of hijacking or poisoning agent-to-agent interactions, using attacker-controlled information to manipulate the coordinated actions. The lack of humans in the loop could delay detection and mitigation.
We will also see a significant rise in the misuse of public-facing application programming interfaces (APIs), API gateways, agentic service APIs, and chatbot-based user interfaces. It will become critical for API lifecycle management to cope with dynamic API handling, as agentic tools dynamically create and destroy API interfaces between agents and with users to provide and consume services.
How can organizations defend against agentic-AI based threats and protect their own agentic AI systems?
Yaz Bekkar. Most companies won’t see agentic AI coming. It’s a bit like Schrödinger’s cat: The IT environment will feel both “secure” and “compromised” at the same time, with no clear clues for defenders.
The only realistic defense is behavior-driven detection powered by AI that’s tuned to your environment, not a tool bought and left on default settings while you hope it works. Pick the right solution, instrument it well and keep refining it. Real resilience comes from the mix: strong technology plus high-caliber expertise operating it day in, day out.
Organizations need to move beyond siloed point tools to a platform with real end-to-end visibility. This includes identities, endpoints, SaaS, cloud, email, and network traffic. You also need to lean on behavior-driven analytics AI that learns what normal looks like for you and can detect anomalies and zero-day-style patterns without waiting for signatures. Keep humans in the loop.
Jesus Cordero-Guzman. Traditional defense mechanisms may struggle to keep pace with the speed and adaptability of AI-driven threats. Organizations will need to invest in advanced security solutions that incorporate AI for threat detection and response, ensuring they can counteract these evolving threats effectively.
What should organizations do to protect their own agentic AI implementations?
Jesus Cordero-Guzman. Cybersecurity leadership will be about leading AI agents as well as people. The new generation of managers will need to learn how to include AI agents as part of their teams and manage them to enhance productivity and decision-making.
We will need to learn more about technical proficiency, natural language processing and data analytics to give AI agents the ‘character’ needed for the organizational business model. This is not just an operational but an ethical challenge. We need to ensure that integrated AI agents are used responsibly and that the decisions made by these systems align with organizational values and societal norms. As AI continues to evolve, so too will the role of leadership in navigating this dynamic landscape in cybersecurity.
Rohit Aradhya. As organizations start to implement agentic AI, a range of AI-specific security controls will be needed. These include robust identity and access management (IAM) for AI agents. Every agent should be treated as a standalone entity with associated users, groups and resource access privileges. Organizations will need to extend their zero-trust framework to AI agents and tools, verifying and validating every request and action an agent attempts, regardless of its previous behavior. They will need to increase the focus on monitoring the operational behavior of systems, so that any deviations are detected quickly. Agent-to-agent communications need to be secured, properly authenticated, encrypted and logged for traceability and explainability, and to protect against the kind of attacks designed to poison communications. Last but not least, organizations need to ensure they understand and comply with standards like the NIST AI Risk Management Framework.
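One building block for authenticated agent-to-agent messaging is to tag every message with a keyed hash and a timestamp, so a receiving agent can reject anything tampered with or replayed. The sketch below is a minimal illustration of that idea using Python's standard library; the agent names, key handling and 30-second freshness window are assumptions for the example, not a production design (a real deployment would use per-agent keys from a secrets manager and transport encryption such as mTLS).

```python
import hashlib
import hmac
import json
import time

# Hypothetical pre-shared key; in practice, issue per-agent keys and rotate them.
SHARED_KEY = b"demo-key-rotate-in-production"

def sign_message(agent_id: str, payload: dict) -> dict:
    """Wrap an agent-to-agent message with its sender, a timestamp and an HMAC tag."""
    body = {"agent_id": agent_id, "ts": time.time(), "payload": payload}
    canonical = json.dumps(body, sort_keys=True).encode()
    body["tag"] = hmac.new(SHARED_KEY, canonical, hashlib.sha256).hexdigest()
    return body

def verify_message(message: dict, max_age_s: float = 30.0) -> bool:
    """Accept only messages with a valid tag that are younger than max_age_s."""
    tag = message.get("tag", "")
    body = {k: v for k, v in message.items() if k != "tag"}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, canonical, hashlib.sha256).hexdigest()
    fresh = (time.time() - body.get("ts", 0)) <= max_age_s
    return hmac.compare_digest(tag, expected) and fresh

# A signed message verifies; a poisoned copy with an altered payload does not.
msg = sign_message("scheduler-agent", {"action": "scale", "replicas": 3})
tampered = dict(msg, payload={"action": "scale", "replicas": 300})
```

Logging the signed message alongside the verification outcome also gives the traceability mentioned above: every action can be tied back to a specific, authenticated agent.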
Beyond the risk, how will agentic AI help to enhance security?
Eric Russo. Agentic AI will be a powerful tool for SOC teams. It presents an opportunity to reduce the burden of reactive work SOC teams are tasked with, increasing resource capacity for more proactive security initiatives like threat research and detection engineering. For instance, many tasks related to the administrative management of security systems can be handled by agentic AI, freeing the SOC analyst or engineer who would traditionally have completed them manually to focus on an emerging threat instead.
Another use case for AI in security defense is expanding upon machine learning in threat detection. Traditional detection mechanisms are more static in nature. Building ML algorithms that establish baselines for user behavior, network traffic and the like, then define criteria for anomaly scoring so the systems decide when to raise a security alert, is a very effective approach to detecting sophisticated threat actors. This approach also has the benefit of yielding higher-confidence alerting, which reduces alert fatigue and false positives.
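The baseline-plus-anomaly-scoring idea can be sketched in a few lines. This is a deliberately simple illustration — a z-score against a learned baseline — not the ML models a production SOC would use; the login-count data and the 3-standard-deviation alert threshold are invented for the example.

```python
import statistics

def anomaly_score(value: float, baseline: list[float]) -> float:
    """How many standard deviations a new observation sits from the baseline mean."""
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard against zero-variance baselines
    return abs(value - mean) / stdev

# Hypothetical baseline: one user's daily login counts over two weeks.
baseline = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6, 5, 4, 6, 5]

THRESHOLD = 3.0  # alert when an observation is more than 3 standard deviations out

print(anomaly_score(5, baseline) > THRESHOLD)   # a normal day: no alert
print(anomaly_score(40, baseline) > THRESHOLD)  # a burst of 40 logins: alert
```

In practice the same scoring pattern is applied per entity (user, host, service) across many signals, and the scores are combined before an alert fires, which is what drives the higher-confidence alerting described above.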