
Attackers are not waiting for defenders to figure out their AI strategy. They’re already using these capabilities operationally. The phishing campaigns are already more convincing. The attack cycles are already faster. The organizations that treat AI adoption in their security programs as something to evaluate over the next few planning cycles are essentially choosing to defend against 2025 attacks with 2022 tools.
Alert triage is one of the more concrete ways AI can function as a defensive tool. Models trained to distinguish high-fidelity signals from background noise, and to prioritize accordingly, can give analysts their time back. Not by replacing human judgment on complex decisions, but by handling the obvious low-value work so analysts aren’t buried in it.
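As a rough illustration of what that triage layer can look like, the sketch below trains a standard scikit-learn classifier on alerts that analysts have already labeled, then uses the resulting score to decide what is worth a human’s attention. The features, labels, and auto-close threshold are hypothetical placeholders, not a description of any particular product.
```python
# Illustrative only: score incoming alerts so analysts see the highest-risk ones first
# and the obvious noise can be closed automatically.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each past alert reduced to a few numeric features (names are hypothetical):
# [severity, asset_criticality, related_alerts_last_hour, new_process, off_hours]
historical_alerts = np.array([
    [3, 5, 12, 1, 1],   # analyst-confirmed incident
    [1, 1,  0, 0, 0],   # benign noise
    [2, 4,  6, 1, 0],   # analyst-confirmed incident
    [1, 2,  1, 0, 1],   # benign noise
])
analyst_labels = np.array([1, 0, 1, 0])  # 1 = real threat, 0 = false positive

model = GradientBoostingClassifier(random_state=0).fit(historical_alerts, analyst_labels)

def triage(alert_features, auto_close_below=0.05):
    """Return a priority score plus a routing decision for the alert."""
    score = model.predict_proba([alert_features])[0][1]
    return score, ("auto-close" if score < auto_close_below else "queue for analyst")

print(triage([3, 5, 10, 1, 1]))  # resembles past incidents: queued for a human
print(triage([1, 1,  0, 0, 0]))  # resembles past noise: candidate for auto-close
```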
Consider what a modern phishing campaign looks like. It’s no longer a mass blast of generic messages with obvious grammatical errors. AI-generated spear phishing pulls in publicly available information about a target: their role, their colleagues, recent company announcements, their communication style inferred from public posts. From that, it constructs something that reads as entirely plausible. The tell-tale signs security training programs have drilled into employees for years are disappearing.
Deepfake voice and video are compounding this. Audio deepfakes have already been used to impersonate executives in real fraud cases, instructing finance employees to wire funds. These attacks don’t exploit software vulnerabilities. They exploit human trust, which is far harder to patch.
The attack surface AI opened up
At the same time, AI is being used to accelerate the technical side of attacks. Automated vulnerability discovery, faster malware iteration, code that can rewrite itself to evade signature-based detection. The gap between a vulnerability being discovered and it being exploited in the wild has been shrinking for years. AI is shrinking it further.
Social engineering was already the most reliable attack vector before AI. People are predictable. Urgency, authority, and fear reliably override judgment. What AI does is take that predictability and weaponize it at a scale that would have required a small army of human operators just a few years ago.
Most security programs are built around pre-deployment controls. Scan the code before it ships. Review the configuration before it goes live. Assess the risk before the system is exposed. These things matter. They also aren’t sufficient, and AI-driven environments make that insufficiency more acute.
Closing that gap requires a different kind of instrumentation. Not scanning before deployment, but monitoring during execution. Not rules written in advance, but behavioral baselines that can detect deviation as it’s happening. Not post-incident forensics, but detection that can interrupt an attack while it’s underway.
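Here is a minimal sketch of what a behavioral baseline at runtime can mean in practice, assuming a simplified event stream: learn which destinations each process normally talks to, then flag anything outside that set once the learning window closes. The event format, process names, and addresses are illustrative; real runtime agents work from kernel- or cloud-level telemetry rather than tidy dictionaries.
```python
# Illustrative sketch: learn per-process network behavior during a baseline window,
# then flag deviations as they happen instead of after the fact.
from collections import defaultdict

class RuntimeBaseline:
    def __init__(self):
        self.learning = True
        self.known_destinations = defaultdict(set)  # process name -> destinations seen

    def observe(self, event):
        """event is a dict like {"process": "nginx", "dest": "10.0.2.15:5432"}."""
        proc, dest = event["process"], event["dest"]
        if self.learning:
            self.known_destinations[proc].add(dest)
            return None
        if dest not in self.known_destinations[proc]:
            return f"DEVIATION: {proc} contacted {dest}, never seen during baseline"
        return None

baseline = RuntimeBaseline()
# Baseline window: observe normal traffic while the workload runs.
baseline.observe({"process": "nginx", "dest": "10.0.2.15:5432"})
baseline.observe({"process": "backup-job", "dest": "10.0.9.1:443"})
baseline.learning = False
# Live detection: the web server suddenly talks to an unfamiliar external host.
alert = baseline.observe({"process": "nginx", "dest": "203.0.113.7:443"})
if alert:
    print(alert)
```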
The alert fatigue problem nobody wants to talk about honestly
When workloads are ephemeral, when AI systems are generating and executing code autonomously, when infrastructure is spinning up and tearing down faster than any human approval process can track, the attack surface is being created and destroyed in real time. Static controls built for a slower, more predictable world don’t map onto that.
Rinki Sethi, a security leader with experience across organizations like Twitter, IBM, and Rubrik, framed it well in a recent conversation: “The challenge today is no longer just about visibility. Now, it’s about cutting through the noise. Security teams are drowning in alerts, and the real risk is getting lost in the volume.”
There’s a specific blind spot that AI makes more dangerous and that security programs have been slow to address: the gap between how systems are configured and how they actually behave when they’re running.
Security teams are drowning in alerts. Not dozens. Thousands, sometimes tens of thousands per day. The majority are false positives. Analysts spend enormous amounts of time triaging noise, which means the real signals get delayed, deprioritized, or missed entirely. The irony is that more visibility, more sensors, and more detection coverage have in many cases made defenders less effective at responding to actual threats. The volume becomes the problem.
Where AI actually helps defenders
None of this is magic. The models need to be trained on relevant data. They need to be tuned to the organization’s specific environment. They produce false positives too. But the ceiling on what’s achievable is meaningfully higher than with rule-based systems.
That’s a harder problem than the vendor pitch decks make it look. But it’s also a solvable one, for organizations willing to approach it as a genuine operational challenge rather than a technology procurement decision.
Here’s a reality most security vendors won’t put in their marketing materials: the tools meant to protect organizations are contributing to one of the biggest problems those organizations face.
The most common mistake is treating AI as a plug-in rather than a rethink. Organizations buy an AI-powered tool, bolt it onto an existing security program, and expect the outcomes to change. They usually don’t, because the underlying processes haven’t changed. Analysts are still triaging alerts the same way. Incident response still follows the same playbook. The AI tool generates its own output that someone has to review, and now there’s more to review.
This is the core tension security leaders are sitting with right now. And most organizations are not positioned to deal with it.
The runtime gap most organizations are ignoring
Security leaders who are paying attention to this know that the organizations falling behind aren’t just accepting more risk. They’re accepting more risk against adversaries who are moving faster. That math gets worse over time.
The organizations making real progress are the ones treating AI adoption as an opportunity to redesign workflows, not just to accelerate existing ones. That means asking different questions. Which parts of our detection and response process are bottlenecked by human time and attention? Where are we making decisions that could be automated without losing meaningful quality? What would it look like to build a security program around AI capabilities rather than retrofitting AI into a program built around human analysts?
The same technology defenders are counting on is already being used against them. Here’s where that leaves us.
That’s not a novel observation in isolation. Plenty of practitioners have said similar things. What makes it worth examining is the implication. If visibility is no longer the bottleneck, then adding more monitoring tools doesn’t solve the problem. It might make it worse. What organizations actually need is better signal, not more of it.
The defensive applications of AI in cybersecurity are real, though the marketing around them has outpaced the operational reality at most organizations. Let’s be honest about what works.
What organizations are getting wrong
Security teams have always been playing catch-up. New attack surfaces emerge, defenders scramble to cover them, and the cycle repeats. That dynamic is as old as the profession. What’s different now is the velocity. Artificial intelligence hasn’t just accelerated the threat landscape. It has fundamentally changed who can participate in it.
The practical question isn’t whether to integrate AI into security operations. That ship has sailed. The question is how to do it in a way that actually changes outcomes: getting defenders off the alert triage treadmill, giving them real visibility into runtime behavior, and letting them respond at something closer to the speed at which attacks unfold.
It also means being honest about what AI can’t do. It can’t make good decisions with bad data. It can’t compensate for a security program that doesn’t have basic hygiene in place. It can’t replace the judgment that comes from experienced practitioners who understand the specific risks and priorities of a particular organization. The technology is a force multiplier. What it multiplies still matters.
The competitive reality
What attackers exploit is runtime behavior. They’re not interested in your configuration management database. They’re interested in what your systems are actually doing when they’re running: what processes are executing, what network connections are being made, what data is being accessed by which identities. That’s where the opportunity exists for them, and it’s where defenders have historically had the least visibility.
Behavioral anomaly detection is genuinely better with machine learning than with rule-based systems. Rules require someone to anticipate the attack pattern in advance and write a signature for it. Machine learning can identify deviations from established baselines that no human analyst would have thought to write a rule for. An account that normally logs in from Chicago at 9 AM suddenly accessing sensitive data at 3 AM from a new device using an API endpoint it has never touched: that’s the kind of pattern that gets lost in a sea of alerts without automated behavioral analysis to surface it.
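A minimal sketch of that idea, using scikit-learn’s IsolationForest on a handful of made-up login features (hour of day, distance from the usual location, new-device and new-endpoint flags). Everything about the data is hypothetical; the point is that the model learns the account’s baseline and scores the 3 AM login as the outlier without anyone having written that rule.
```python
# Illustrative sketch: model an account's normal login behavior, then score new logins
# against that baseline. No one has to anticipate the specific attack pattern in advance.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
n = 200
# Hypothetical features per login: hour of day, km from usual location,
# new-device flag, new-API-endpoint flag.
normal_logins = np.column_stack([
    rng.normal(9.0, 0.5, n),          # usually logs in around 9 AM
    np.abs(rng.normal(2.0, 1.0, n)),  # from roughly the same place
    np.zeros(n),                      # known device
    np.zeros(n),                      # previously used API endpoints
])

detector = IsolationForest(random_state=0).fit(normal_logins)

typical    = np.array([[9.0, 2.0, 0, 0]])
suspicious = np.array([[3.0, 800.0, 1, 1]])  # 3 AM, far away, new device, new endpoint

print("typical login score:   ", detector.decision_function(typical)[0])
print("suspicious login score:", detector.decision_function(suspicious)[0])
print("verdict:", detector.predict(suspicious)[0])  # -1 marks an outlier vs. the baseline
```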
Incident response is another area where automation driven by AI is showing genuine value. The investigation steps that follow an alert (pulling related logs, mapping affected systems, cross-referencing threat intelligence, building a timeline of events) are time-consuming and largely mechanical. AI can execute them faster and more comprehensively than a human analyst, freeing that analyst to focus on the judgment calls that actually require experience.
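The mechanical part of that work is straightforward to sketch. The snippet below assumes a hypothetical log format, gathers everything tied to the flagged identity, maps the hosts involved, and orders it into a timeline; in a real deployment an AI layer would sit on top of this kind of gathering to summarize findings and suggest next steps.
```python
# Illustrative sketch of the mechanical first pass of an investigation: gather related
# events, map the systems involved, and lay them out as a timeline. Log format and
# field names are hypothetical stand-ins for whatever a SIEM or log pipeline exposes.
from datetime import datetime

raw_logs = [
    {"ts": "2025-06-01T03:02:11", "host": "web-01", "user": "svc_deploy", "event": "login from new IP"},
    {"ts": "2025-06-01T03:04:48", "host": "web-01", "user": "svc_deploy", "event": "interactive shell spawned"},
    {"ts": "2025-06-01T03:06:05", "host": "db-02",  "user": "svc_deploy", "event": "bulk read of customer table"},
    {"ts": "2025-06-01T09:15:00", "host": "web-03", "user": "alice",      "event": "routine deploy"},
]

def investigate(flagged_identity, logs):
    """Collect everything tied to the flagged identity and build a timeline."""
    related = [e for e in logs if e["user"] == flagged_identity]
    affected_hosts = sorted({e["host"] for e in related})
    timeline = sorted(related, key=lambda e: datetime.fromisoformat(e["ts"]))
    return {
        "identity": flagged_identity,
        "affected_hosts": affected_hosts,
        "timeline": [f'{e["ts"]}  {e["host"]:7}  {e["event"]}' for e in timeline],
    }

report = investigate("svc_deploy", raw_logs)
print("hosts touched:", report["affected_hosts"])
print("\n".join(report["timeline"]))
```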
Threat intelligence correlation is a third. The volume of threat intelligence data available is enormous and growing. No team can manually stay current with it. AI can continuously ingest and correlate that data against an organization’s specific environment, flagging when a newly documented attacker technique or infrastructure overlap becomes relevant to them specifically.
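In its simplest form, that correlation is set membership: does anything in the latest feed show up in what our environment is actually doing? The sketch below uses hypothetical feed entries and a hypothetical environment snapshot; production pipelines ingest STIX/TAXII feeds and query live EDR or network telemetry, but the matching logic is the same idea.
```python
# Illustrative sketch: match newly published indicators against what the environment is
# actually doing. Feed entries and the environment snapshot below are made up.
threat_feed = [
    {"type": "ip",     "value": "203.0.113.7",         "context": "C2 infrastructure"},
    {"type": "domain", "value": "updates-cdn.example",  "context": "payload staging"},
    {"type": "hash",   "value": "d41d8cd98f00b204",     "context": "known loader (truncated hash)"},
]

observed_in_environment = {
    "ip":     {"10.0.2.15", "203.0.113.7"},             # recent outbound connections
    "domain": {"api.internal", "updates-cdn.example"},   # recent DNS resolutions
    "hash":   {"a11c0b7e92d14f03"},                      # binaries seen executing
}

relevant = [ioc for ioc in threat_feed if ioc["value"] in observed_in_environment[ioc["type"]]]
for ioc in relevant:
    print(f'RELEVANT TO US: {ioc["type"]} {ioc["value"]} ({ioc["context"]})')
```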
You no longer need to be a sophisticated threat actor to run a sophisticated attack. AI lowers the floor on technical expertise dramatically. A reasonably motivated attacker can now generate convincing phishing content at scale, automate credential stuffing across thousands of targets, and adapt social engineering scripts on the fly based on what’s working. The cost of mounting a targeted campaign has dropped by an order of magnitude. The cost of defending against one hasn’t.





