
New research reveals organizations are placing trust in AI-driven defenses before rigorously testing them
ORLANDO, Fla., May 6, 2026 /PRNewswire/ -- SimSpace, the AI Proving Grounds for cybersecurity, today announced its new research report, 'The State of Agentic Cybersecurity,' showing that confidence in AI-driven security is running ahead of measurable performance. According to the research, 78% of security leaders report high confidence in their defenses. Despite that confidence, teams are scoring as low as 30% in security exercises, according to SimSpace's Defensive Security Readiness (DSR) data, a proprietary metric for measuring readiness through training and testing. This research provides a framework for CISOs and security operations leaders to validate agentic AI performance in mission-critical environments.
This gap between confidence and proof is emerging as AI becomes more embedded in day-to-day security operations. The report highlights that 73% of organizations are already using AI agents in their SOC at a moderate to high level, yet testing practices have not kept pace. Only 29% of organizations conduct continuous simulation testing, while 44% test biannually or less frequently, if at all. These findings suggest that many teams are putting AI into use before they fully understand how it performs under realistic conditions.
"Assistive AI agents are mostly what's being deployed to production today; they're not fully autonomous agents," said Lee Rossey, CTO and Co-Founder of SimSpace. "It's noteworthy, though, that there's not rigorous testing of those agents prior to deployment; enterprise leaders seem to be counting on the humans in the loop to spot and correct any erratic behavior from their AI agents. Said another way, enterprise executives have not yet focused on and/or figured out how to develop trust in agentic AI before deploying it to production."
Combining recent survey data from global CISOs and senior leaders with never-before-released data from SimSpace environments, the research sought to understand how security leaders are training human operators and testing AI agents to work together, so they can trust the combined performance once deployed to production.
"Autonomous agentic solutions are what's coming next, and enterprise executives are going to want to have complete trust in them to perform appropriately in a wide variety of situations before they get deployed to production," continued Rossey. "They're going to want to develop that trust through rigorous testing in a production-like environment where it's ok to fail. Eventually, they are going to want to trust those AI systems to act on their behalf in production without depending on human operators to monitor their every action closely. What's interesting in this research is that it shows that they have not yet begun rigorous testing to develop trust in those AI agents as a natural part of their deployment process."
Key findings from the report include:
- 73% of organizations are already using AI agents in their SOC
- 78% of security leaders report high confidence in their defenses, yet one-off tabletop exercises and certification courses remain the most-used training methods for security teams, approaches that are no longer sufficient to defend against AI-driven threats.
- 44% of organizations test biannually or less frequently, if at all.
- While AI augmentation in the SOC should improve performance and efficiency, deployment of AI tools initially reduces performance by 10-20% before improvements are realized through repeated testing.
- Each full simulation exercise drives about a 3-5 percentage point improvement in Defensive Security Readiness scores, with the biggest gains occurring in the early cycles.
- Only 29% of organizations say they conduct security testing with cyber simulations—realistic scenarios that mirror real-world attacks.
- Teams that run frequent, realistic simulations improve their Defensive Security Readiness (DSR) scores by 20-50% per event and reach high performance levels within four to six iterations. Teams that test infrequently plateau at significantly lower levels.
Implementation Pathways for Agentic AI Security
The report outlines a clear path forward for security leaders:
- Shift from episodic to continuous testing: AI operates continuously, so validation must do the same.
- Measure what truly matters: Focus on detection success, response accuracy, and decision quality, rather than just alerts or activity.
- Prepare for the learning curve: Anticipate early disruptions and optimize processes through iteration.
- Establish AI Proving Grounds: Create realistic environments where AI agents and human teams can be trained and tested together under real-world conditions.
The full report is available at https://simspace.com/state-of-agentic-cybersecurity
To learn more about the AI Proving Grounds: https://simspace.com/ai-proving-grounds/
Frequently Asked Questions
Q: What is the confidence gap in AI security?
A: The confidence gap refers to the disparity between security leaders' perceived protection (78% report high confidence) and their teams' actual performance in realistic simulations (DSR scores as low as 30%).
Q: What is the SimSpace Defensive Security Readiness (DSR) metric?
A: DSR is a proprietary, quantitative metric that measures a security team's ability to defend against realistic cyber threats through evidence-based training and continuous simulation.
Q: How does simulation testing improve agentic AI systems?
A: Simulation testing allows organizations to identify "erratic behavior" in AI agents before they reach production. Each simulation cycle typically drives a 3-5 percentage point improvement in DSR.
Q: What is an AI Proving Ground?
A: An AI Proving Ground is a high-fidelity, virtualized environment (a cyber range) where AI agents and human operators are tested together against realistic adversary behavior to build trust and operational reliability.
About SimSpace:
SimSpace is the realistic cyber simulation infrastructure for continuously training, testing, and validating AI agents. By enabling AI agents to work together with human operators in an intelligent cyber range, SimSpace serves as the AI Proving Grounds for elite cyber teams. To learn how SimSpace helps cyber teams outperform and outsmart adversaries in any terrain, visit: www.SimSpace.com.
PR contact:
Amy Rice
[email protected]
508-978-6635
SOURCE SimSpace Corporation