
As AI-native applications multiply, organizations are losing sight of where and how AI is being used, creating new blind spots and security risks
SAN FRANCISCO, Nov. 12, 2025 /PRNewswire/ -- Harness, the AI DevOps Platform™ company, today released new research exposing an AI visibility crisis across modern enterprises. As organizations race to embed AI into every product and workflow, most have lost sight of where and how those AI components are actually being used – creating a new class of security vulnerabilities that traditional tools were not built to secure.
According to The State of AI-Native Application Security 2025, 75% of security practitioners say shadow AI will eclipse the risks once caused by shadow IT, with the majority of organizations already reporting security incidents tied to the use of AI capabilities. The findings highlight a deeper shift: shadow AI isn't just a new risk category, it's a symptom of lost visibility and control as AI-native applications rapidly multiply across organizations. Security teams are struggling to monitor their tool sprawl, and communication breakdowns between development and security teams exacerbate the problem even further.
How AI Is Expanding the Enterprise Attack Surface
As enterprises rush to adopt AI, security teams are struggling to understand and govern what is being built.
Based on responses from 500 security practitioners and decision-makers across the United States, the United Kingdom, France, and Germany, the study shows that:
- Shadow AI is the new shadow IT: 62% say they have no visibility into where LLMs are in use across their organization.
- AI sprawl is outpacing control: 74% of respondents say AI sprawl will "blow API sprawl out of the water" when it comes to risk.
- The threat landscape is evolving: 82% of respondents believe AI-native applications are the new frontier for cybercriminals, and 63% consider those apps more vulnerable than traditional IT applications.
- AI-native apps are already under attack: Enterprises have already experienced incidents involving LLM prompt injection (76%), vulnerable LLM code (66%), and LLM jailbreaking (65%).
- And developers aren't owning AI security: 62% say developers aren't taking responsibility for securing AI-native apps, and only 43% say developers build with security from the start.
"Shadow AI has become the new enterprise blind spot," said Adam Arellano, Field CTO at Harness. "Traditional security tools were built for static code and predictable systems — not for adaptive, learning models that evolve daily. Security has to live across the entire software lifecycle — before, during, and after code — so teams can move fast without losing visibility or control."
The AI Security Divide
AI adoption continues to surge with almost two-thirds (61%) of new enterprise applications now designed with AI components in mind. The challenge is that most teams have not dedicated the time for security training or provided the oversight required to secure these apps effectively.
- Developers lack the time and training: 62% say their developers don't have time to implement comprehensive AI-native security, and 62% say they lack the necessary expertise.
- Speed and security are mismatched: 75% report that AI applications evolve faster than security can keep up.
- Collaboration breakdowns are widening the gap: Only 34% of developers notify security before starting AI projects, and just 53% before going live.
- Perception remains a barrier: 74% of security leaders say developers view security as a blocker to AI innovation.
"AI has redrawn the enterprise attack surface overnight," Arellano added. "Where teams once monitored code and APIs, they now must secure model behavior, training data, and AI-generated connections. The only way forward is for security and development to operate as one — embedding governance directly into the software delivery process."
The Path Forward: Building AI-Native Security Resilience
Without immediate visibility into where AI is being used — and by which teams — organizations face an accelerating cycle of risk. Untracked models, exposed APIs, and unsanctioned AI tools are becoming the new shadow infrastructure, making it nearly impossible to enforce policy or detect compromise.
To build AI-native security resilience, Harness recommends that enterprises:
- Build security in from the start through shared governance between security and development.
- Discover all new AI components as they appear and ensure they are monitored and logged.
- Gain real-time visibility into AI components, APIs, and model outputs to detect anomalies early.
- Dynamically test applications against AI-specific threats to identify security risks prior to production.
- Protect AI-native applications in production to reduce risk of sensitive data disclosure.
To learn more, download the full State of AI-Native Application Security 2025 report here: https://www.harness.io/the-state-of-ai-native-application-security
About The Research
This report is based on a survey of 500 security practitioners and decision makers responsible for securing AI-native applications, commissioned by Harness and conducted by independent research firm Sapio Research. The sample included 200 respondents in the United States, and 100 each in the UK, Germany, and France.
About Harness
Harness is the AI DevOps Platform™ company, enabling engineering teams to build, test, and deliver software faster and more securely. Powered by Harness AI and the Software Delivery Knowledge Graph, the platform brings intelligent automation to every stage of the software delivery lifecycle after code—removing toil and freeing developers from manual, repetitive work. Companies like United Airlines, Citibank, and Choice Hotels use Harness to accelerate releases by up to 75%, cut cloud costs by 60%, and achieve 10× efficiency across DevOps. Based in San Francisco, Harness is backed by Menlo Ventures, IVP, Unusual Ventures, and Citi Ventures.
SOURCE Harness
Share this article