Agentic AI Turns Enterprise Cybersecurity Into Machine vs. Machine Battle

Explore how agentic AI is transforming cybersecurity, shifting from human intervention to autonomous threat detection and response.

July 22, 2025
5 min read
PYMNTS

For years, cybersecurity has been defined by a simple but dangerous gap: the time between when a vulnerability is discovered and when it’s patched. Fraudsters have traditionally exploited that window, often with catastrophic results.

Now, Google is showing that the arms race may no longer be moving at human speed, potentially signaling an end to the era of overloaded analysts chasing alerts and engineers patching software after the fact. The tech giant unveiled several updates around agentic artificial intelligence (AI)-powered cybersecurity: it is developing autonomous systems that can detect, decide, and respond to threats in real time — often without human intervention.
"Our AI agent Big Sleep helped us detect and foil an imminent exploit," Sundar Pichai, CEO of Google, posted on social platform X. "We believe this is a first for an AI agent — definitely not the last — giving cybersecurity defenders new tools to stop threats before they’re widespread."

From Threat Reaction to Autonomous Prevention

Historically, zero-day vulnerabilities — unknown security flaws in software or hardware — have been discovered by adversaries first, exploited quietly, and disclosed only after damage has occurred. Big Sleep reversed that pattern: no alerts, no tip-offs, just AI running autonomously and flagging a high-risk issue before anyone else even knew it existed.

For CISOs, this means a new category of tools is emerging: AI-first threat prevention platforms that don’t wait for alerts but actively seek out weak points in code, configurations, or behavior and take defensive action automatically.

For CFOs, it signals a change in cybersecurity economics. Prevention at this scale is potentially cheaper and more scalable than the human-powered models of the past. But that’s only if the AI is accurate and accountable.
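
To make the idea concrete, here is a purely hypothetical sketch of the detect-decide-respond loop such a platform could run. It is not modeled on Big Sleep or any Google system; the scanner, severity threshold, and finding below are invented for illustration.

```python
# Purely hypothetical sketch of an agentic detect-decide-respond loop.
# Not based on Big Sleep or any Google system; the scanner, threshold,
# and finding are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str        # e.g. a code path, config file, or endpoint
    description: str  # human-readable summary of the weakness
    severity: float   # 0.0 (informational) to 1.0 (critical)

def scan_for_weak_points() -> list[Finding]:
    """Stand-in for an AI-driven sweep of code, configurations, or behavior."""
    return [Finding("payments-service/config.yaml", "debug endpoint exposed", 0.9)]

def respond(finding: Finding, auto_threshold: float = 0.8) -> str:
    """Act autonomously above a severity threshold; otherwise escalate to a human."""
    if finding.severity >= auto_threshold:
        return f"auto-remediated: {finding.asset} ({finding.description})"
    return f"queued for analyst review: {finding.asset}"

for finding in scan_for_weak_points():
    print(respond(finding))
```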

Agentic AI and Risk Accountability at the Edge of the Front Line

With power comes responsibility, and in cybersecurity, that translates to risk ownership. Agentic AI systems, by definition, act independently. That autonomy introduces new challenges for governance and compliance: Who’s responsible if an AI mistakenly flags a critical system and shuts it down? What happens if the AI fails to detect a breach?

This isn’t just a technical upgrade; it’s a governance revolution. Enterprises have to treat these AI agents as non-human actors with unique identities in their systems, backed by audit logs, human-readable reasoning, and forensic replay (a minimal sketch of what that could look like appears at the end of this article).

The emergence of agentic AI solutions for cybersecurity also has implications for enterprise composition. As workforces remain hybrid and attack surfaces widen, endpoint security is only as good as its weakest device. Bringing autonomous protection to the edge — phones, browsers, apps — may no longer be optional.

In that light, cybersecurity investments must now answer a new question: How much decision-making power are we ready to give our machines? The adversaries aren’t waiting, and the AI agents aren’t slowing down.

For WEX Chief Digital Officer Karen Stroup, the best approach to deploying agentic AI involves a disciplined strategy of experimentation. "If you’re going to experiment with agentic AI or any type of AI solutions, you want to focus on two things: the areas where you’re most likely to have success and whether there will be a good return on that investment."

For all PYMNTS AI and digital transformation coverage, subscribe to the daily AI and Digital Transformation Newsletters.

Originally published at PYMNTS on July 22, 2025.
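
As a purely hypothetical illustration of the governance requirements described above (a unique identity for the non-human agent, an audit trail with human-readable reasoning, and forensic replay), the following minimal sketch shows one way such records could be kept. Every identifier and field name in it is invented.

```python
# Purely hypothetical sketch of governance for an agentic security tool:
# a unique non-human identity, an append-only audit log with
# human-readable reasoning, and a forensic replay of its decisions.
# Every identifier and field name here is invented for illustration.
import json
import time
import uuid

AGENT_ID = "agent://soc/threat-hunter-01"  # unique identity for the non-human actor

def record_action(action: str, target: str, reasoning: str,
                  log_path: str = "agent_audit.jsonl") -> dict:
    """Append one auditable decision record to the agent's log."""
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor": AGENT_ID,
        "action": action,        # what the agent did
        "target": target,        # which system it touched
        "reasoning": reasoning,  # human-readable justification
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

def replay(log_path: str = "agent_audit.jsonl") -> None:
    """Forensic replay: reconstruct the agent's decisions in order."""
    with open(log_path) as log:
        for line in log:
            e = json.loads(line)
            print(f"{e['actor']} {e['action']} -> {e['target']}: {e['reasoning']}")

record_action("isolate-host", "laptop-4821", "anomalous beaconing matched a known C2 pattern")
replay()
```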