ATTACKERS DON'T NEED APPROVAL
When AI Security Tools Become the New Attack Surface
Anthropic just announced they found 500+ previously undetected vulnerabilities in production open-source codebases. Everyone's celebrating this as a security breakthrough.
I see something different.
Those 500+ vulnerabilities didn't just appear. They've been sitting in production code for years, maybe decades. We've been running fundamentally insecure software this entire time, and we only know about it now because AI can see what humans couldn't.
The uncomfortable question: if AI can find them now, how long before attackers figure out the same tricks?
The Asymmetric Race We're Already Losing
Here's the paradox that keeps me up at night.
Anthropic's Claude Code Security requires human review before applying any fixes. That's positioned as a safety feature. But attackers using similar AI models don't need safety features. They don't need human approval. They can operate at machine speed while we're stuck waiting for compliance checks.
Defenders need multiple layers, human review, compliance processes. Attackers just need one successful exploit.
We've always been in an asymmetric race with bad actors. Security and compliance aren't always top of mind for product managers or engineers. But now we're in a situation where both sides have access to the same AI capabilities, and only one side is operating with guardrails.
Anthropic acknowledges this openly: "the same capabilities that help defenders find and fix vulnerabilities could help attackers exploit them." Executive Order 14110 defines dual-use foundation models as those "enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation."
We're not preparing for a future threat. We're responding to a present reality where unchecked bad actors are likely already using AI to exploit vulnerabilities in existing codebases, frameworks, and platforms.
The Shift Left That Might Slow You Down
The industry narrative says we're moving toward a "shift left" approach. Security scanning and compliance implemented immediately at development, not post-QA or post-deployment.
That sounds great in theory.
In practice, it means developers deal with security findings immediately, not later. Every commit potentially triggers AI-discovered vulnerabilities that need human review before moving forward. What happens to development velocity when your AI security companion flags issues in real time?
The vision is that in a mature AI-SDLC process, AI will fix code in real time. But Anthropic's current tool explicitly requires human review for fixes. That gap between vision and reality creates a window where attackers operating without guardrails can move faster than defenders constrained by compliance.
Adoption reflects this friction: only 12% of GitHub organizations enforce CI/CD security settings. Despite the promise of automated security at development time, integrating it early can slow production while automated tools are learned, configured, and tuned.
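To make the shift-left idea concrete, here's a minimal sketch of what a pre-commit security gate might look like. Everything here is illustrative: the patterns, labels, and `scan_source` function are hypothetical, and real tools (AI-powered or otherwise) go far beyond regex matching. The point is where the check runs, before code ever reaches CI.

```python
import re

# Illustrative, not exhaustive: a few high-signal patterns a shift-left
# gate might flag before a commit ever reaches the pipeline.
RISKY_PATTERNS = {
    "hardcoded secret": re.compile(
        r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]"
    ),
    "eval on dynamic input": re.compile(r"\beval\s*\("),
    "subprocess with shell=True": re.compile(r"shell\s*=\s*True"),
}

def scan_source(text: str) -> list[tuple[int, str]]:
    """Return (line_number, finding_label) pairs for each risky line."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

if __name__ == "__main__":
    sample = 'API_KEY = "sk-123"\nresult = eval(user_input)\n'
    for lineno, label in scan_source(sample):
        print(f"line {lineno}: {label}")
```

Even this toy version shows the velocity trade-off: every match is a finding a human has to triage before the commit moves forward.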
Vendor Lock-In at the Knowledge Layer
We have vendor lock-in across the industry. That's not news.
But this feels qualitatively different.
Traditional vendor lock-in meant switching costs and integration pain. This is lock-in at the knowledge layer. The AI holds understanding of your codebase's vulnerabilities that your team might not fully grasp.
When Anthropic or any AI vendor changes terms, pricing, or access, you're not just switching tools. You're potentially losing institutional security intelligence.
Anthropic just updated its terms to prohibit third-party harnesses for Claude subscriptions. That's a business decision to protect their subscription model. But it highlights a broader concern: organizations risk outsourcing not just infrastructure, but intelligence itself.
By 2025, 89% of enterprise AI usage was invisible to organizations. More than half of all AI failures originate from third-party tools. You build an AI-powered engine, but you don't own the keys.
This is a significant risk.
But here's the pragmatic calculation: the downside of a breach is considerably more significant than vendor lock-in. Organizations will adopt AI security tools and accept vendor dependency because the alternative is worse.
When Security Becomes Table Stakes
If every organization makes that same calculation and adopts AI security tools, what actually differentiates one company's security posture from another's?
The answer: nothing.
The best security isn't an advantage. It's table stakes.
When major cybersecurity vendors saw immediate stock declines following Anthropic's announcement (CrowdStrike dropped 7.8%, Palo Alto Networks declined 6.4%, Cloudflare slid 5.9%), the market was signaling something important: AI security is becoming commoditized.
Once every security vendor offers AI-powered scanning, the competitive advantage disappears. You're left with the implementation gap.
The real bottleneck that separates organizations who will thrive from those who won't isn't detection capability. It's organizational velocity in remediation. It's cultural acceptance of AI suggestions. It's the ability to respond at the speed AI enables discovery.
A recent study showed that over 70% of vulnerabilities shared on bug bounty platforms were caused by simple errors like failure to implement URL checks. If basic mistakes constitute 70% of vulnerabilities, and AI can now find complex ones at scale, the remediation backlog doesn't shrink.
It explodes.
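The "failure to implement URL checks" class of bug is worth seeing in miniature. The sketch below is an assumption-laden illustration (the `ALLOWED_HOSTS` allowlist and both functions are hypothetical), contrasting the naive substring test that dominates bug bounty reports with an exact-hostname comparison:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts this service may call.
ALLOWED_HOSTS = {"api.example.com"}

def naive_check(url: str) -> bool:
    # The "simple error": a substring test an attacker defeats
    # with a URL like https://api.example.com.evil.net/
    return "api.example.com" in url

def strict_check(url: str) -> bool:
    # Parse the URL and compare the exact hostname against the
    # allowlist, requiring HTTPS as well.
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS
```

`naive_check` accepts `https://api.example.com.evil.net/steal`; `strict_check` rejects it. Bugs this shallow are exactly what AI scanners will surface by the thousand, and each one lands in someone's remediation queue.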
The Real Risk Nobody's Talking About
Companies that will thrive in this new paradigm will focus on their customers and the market's needs.
That might sound like a pivot away from the technology entirely. It is.
As AI security becomes commoditized and table stakes, the winners won't be those with the best security tools. They'll be those who aren't distracted by security theater and can actually focus on delivering value.
The real risk is that organizations become so consumed with implementing AI security layers that they lose sight of why they're building software in the first place.
Anthropic's tool could usher in automated OWASP compliance at the time of coding. That's significant. But it's also just one line of defense among many; security already comes from multiple overlapping layers applied at different stages of the lifecycle.
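Anthropic hasn't published exactly what its tool flags, so treat this as a hedged illustration of the kind of OWASP issue (A03, Injection) that scanning at coding time would catch: the classic string-built SQL query next to its parameterized fix, using Python's standard `sqlite3` module.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Injection risk: string interpolation lets input like
    # x' OR '1'='1 rewrite the query itself.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats username strictly
    # as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # every row leaks
print(len(find_user_safe(conn, payload)))    # no rows match
```

Catching this at the moment the f-string is typed, rather than in a post-deployment audit, is the whole promise of shift-left.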
AI will see things at a scale you can't. An engineer's purview in any given moment is limited relative to the size of a modern codebase. That's the value proposition.
But when the tool becomes the expertise, when organizations depend on AI for security at scale, you're not just adopting a tool. You're accepting that human developers will no longer fully understand the security implications of their own code.
We're choosing which risk we're more comfortable with: adopt AI security tools and accept vendor dependency at the knowledge layer, or fall behind competitors and attackers who are already using AI.
There's no realistic middle path.
The question isn't whether AI can find vulnerabilities better than humans. It's whether you can respond fast enough when both sides have the same tools.