Mistaken Identity? AI Agent Oversight Key To Success

AI agents promise huge productivity gains but require robust identity and access oversight to prevent major security risks.

August 6, 2025
5 min read
Kyle Alspach

Granting AI increased autonomy poses critical security risks, opening the door for solution providers to tackle access privileges and more.

If the tech industry gets things right with AI agents, it could soon be possible to grant significant autonomy to entire teams of virtual assistants to go out and achieve “very complex goals” all on their own, according to Accenture’s Damon McDougald. “That’s where everyone has the vision and desire for agents to go,” said McDougald, global cyber protection lead at Dublin, Ireland-based Accenture, No. 1 on CRN’s 2025 Solution Provider 500.

However, if industry efforts come up short, especially when it comes to security, the AI agent revolution and its promise of unprecedented productivity gains could hit major roadblocks, cybersecurity experts and industry executives told CRN. This is especially true for security of identities and management of access privileges, already a notoriously difficult area for organizations even with human workers. The issue could be exponentially riskier with autonomous AI workers.

In other words, to truly turn the industry vision for AI agents into reality, identity and access considerations should be paramount, experts said.
“You need to be thinking from an identity standpoint of how do [agents] get access to things at different times for different time lengths?” McDougald said. “That’s different than what we usually do today for humans.
“[However], if we get agents right, we’ll see a scale [of productivity improvements] that we haven’t seen before,” he said. “At some point in time, there will potentially be billions of agents running around the internet. There will be a marketplace of agents, and we’re only limited by our compute and imagination on how agents will work. And so the identity tools need to fit in that reality today.”
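McDougald's point that agents need access "at different times for different time lengths" suggests short-lived, task-scoped credentials rather than the standing entitlements typical for human accounts. Below is a minimal illustrative sketch of that idea; the `AgentCredential` and `issue_credential` names are hypothetical and do not come from any vendor's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    agent_id: str
    scopes: tuple          # systems/actions this credential permits
    expires_at: datetime   # short-lived by design

def issue_credential(agent_id, scopes, ttl_minutes=15):
    """Issue a short-lived, task-scoped credential for an agent."""
    return AgentCredential(
        agent_id=agent_id,
        scopes=tuple(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

def is_allowed(cred, scope):
    """Permit an action only if it is both in scope and unexpired."""
    return scope in cred.scopes and datetime.now(timezone.utc) < cred.expires_at

cred = issue_credential("invoice-agent-7", ["crm:read"], ttl_minutes=5)
print(is_allowed(cred, "crm:read"))   # in scope and unexpired
print(is_allowed(cred, "crm:write"))  # outside the granted scope
```

The contrast with human identity management is the default: access expires in minutes unless reissued, instead of persisting until someone remembers to revoke it.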
The potential risks from insecure or misconfigured AI agents are not hard to envision, particularly from an identity and access perspective. The entire purpose of agents is to connect to many different systems and data sources to autonomously accomplish tasks. But if an agent gains access it was never meant to have, a data breach is the probable result.
“[With AI agents], the need for the access to data is going to typically be greater,” said Matt Shufeldt, chief solutions officer at San Diego-based systems integrator Evotek, No. 92 on CRN’s 2025 Solution Provider 500. “And because of that, you need to solve for those data management issues more quickly.”
Exacerbating the issue, agents are charged with executing tasks without constant human oversight, so breaches could go entirely unnoticed for a period of time. The risks may sound familiar, akin to those the industry grappled with at the arrival of GenAI, but they are magnified several times over when it comes to agents, experts said.
“Whatever we thought were the problems in the GenAI or LLM world, agentic is multiple times that,” said Ankur Shah, cofounder and CEO at Straiker, a Sunnyvale, Calif.-based startup focused on AI security.
“Agents are basically LLMs that can reason, make decisions and take action on [those decisions]. They’re chained together, so they are nothing but GenAI on steroids,” Shah said. “And so you also have to think about putting your security on steroids in the agentic world.”
Solution providers that can help organizations deal with the challenges around agentic AI will find no shortage of opportunity going forward, solution provider executives told CRN. The bottom line is that the idea of granting AI greater autonomy than ever before raises new security questions that most organizations are not equipped to answer on their own, executives said.
“That’s where I see us just having a ton of opportunity,” said Kevin Lynch, CEO of Denver-based Optiv, No. 28 on CRN’s 2025 Solution Provider 500.
“I think it starts in our advisory motion by looking at helping the client to think holistically [about agentic AI],” Lynch said. “We’re helping that client to think through the implications of their operational choices.”
Experts told CRN that oversight of identity and access issues will be central to any strategy around enabling AI agents. Whereas traditional identities have been tied to humans, the larger scale on which agents will operate means major changes ahead for management, governance and authorization of identities, according to Alex Bovee, co-founder and CEO of ConductorOne, an identity governance startup focused on agentic AI with headquarters in Portland, Ore., and San Francisco.
“It’s just completely different patterns and paradigms for how you would manage those identities at scale,” Bovee said.
Indeed, the maxim that security is only possible when you first have visibility seems truer than ever when it comes to agentic technologies, according to vendor and solution provider executives. Real-time oversight around the actions taken by AI agents will be pivotal, which will likely mean adding an extra layer of enforcement on top of AI agents that will ensure they are not going astray from what is expected, Bovee said.
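The "extra layer of enforcement" Bovee describes can be pictured as a gate that every proposed agent action passes through: checked against policy, logged for visibility, and blocked if out of bounds. The sketch below is purely illustrative; the `enforce` function and policy table are invented names, not any vendor's product.

```python
# Audit trail gives the real-time visibility the experts describe.
AUDIT_LOG = []

# Allowlist of actions each agent identity may take (illustrative).
POLICY = {
    "support-agent": {"ticket:read", "ticket:update"},
}

def enforce(agent_id, action, perform):
    """Gate a single agent action: log it, then allow or block."""
    allowed = action in POLICY.get(agent_id, set())
    AUDIT_LOG.append({"agent": agent_id, "action": action, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{agent_id} is not authorized for {action}")
    return perform()

result = enforce("support-agent", "ticket:read", lambda: "ticket #123 contents")
print(result)
```

The key design choice is that the agent never calls downstream systems directly; the enforcement layer holds the credentials and the policy, so an agent "going astray" surfaces in the audit log rather than in a breach report.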
“[With AI agents], now you have not just a user visibility and transparency challenge,” said Ben Prescott, head of AI solutions at Irvine, Calif.-based Trace3, No. 34 on CRN’s 2025 Solution Provider 500. “Now you [have to know] what is the agentic solution itself actually planning and executing? And how do we understand what the right output is that is actually generating within that agentic workflow?”
One identity security startup working with Trace3 is Descope, which is seeking to become a go-to agentic identity provider for the coming era of AI agents, according to Descope co-founder Rishi Bhargava. Los Altos, Calif.-based Descope is working to give security teams the ability to manage which agents are authorized to connect to which tools inside their organization, as well as the level of permissions the agents receive from the tools, Bhargava said.
“We are able to do pretty much a full life-cycle management on the agent: creating an agent, on-boarding an agent, revoking an agent, removing permissions of an agent,” he said.
The idea, Bhargava said, is that security teams will put controls and policies for agentic AI in place while configuring the dozens of tools involved with creating AI agents. Then developers can use Descope’s SDKs to build the agents, he said. If this type of process is not followed, the security risks can be substantial, according to Bhargava.
“The alternative is the security team has no idea about what agents got deployed, what tools they’re connected to, what level of permission these agents have, and they have no control and no visibility,” he said. “We are starting to see customers already engage [around this issue]. They’re saying, ‘We are blocking our developers from deploying agents, but we want to enable them.’ This is the way to securely enable them.”
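The full life-cycle Bhargava lists (creating, onboarding, revoking, and removing permissions from agents) can be sketched as a simple registry. This is a hypothetical illustration of the concept only; the `AgentRegistry` class is invented here and does not correspond to Descope's actual SDK.

```python
class AgentRegistry:
    """Toy registry tracking agent identities, permissions, and status."""

    def __init__(self):
        self._agents = {}

    def create(self, agent_id, permissions):
        # Onboard an agent with an explicit permission set.
        self._agents[agent_id] = {"permissions": set(permissions), "active": True}

    def revoke(self, agent_id):
        # Deactivate the agent without losing its audit history.
        if agent_id in self._agents:
            self._agents[agent_id]["active"] = False

    def remove_permission(self, agent_id, permission):
        self._agents[agent_id]["permissions"].discard(permission)

    def can(self, agent_id, permission):
        entry = self._agents.get(agent_id)
        return bool(entry and entry["active"] and permission in entry["permissions"])

registry = AgentRegistry()
registry.create("billing-agent", ["invoices:read", "invoices:send"])
print(registry.can("billing-agent", "invoices:send"))  # True while active
registry.remove_permission("billing-agent", "invoices:send")
registry.revoke("billing-agent")
print(registry.can("billing-agent", "invoices:read"))  # False once revoked
```

The point of centralizing this is exactly the visibility gap Bhargava warns about: without a registry, the security team "has no idea what agents got deployed."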
The larger, well-established identity security vendors—including SailPoint, Okta and Ping Identity—also have quickly embraced the opportunity to help solve some of the foremost agentic security challenges, according to the CEOs of the companies. A recent survey conducted by SailPoint found 80 percent of respondents reporting that AI agents had taken actions that were not intended, such as accessing an unauthorized system or sharing sensitive data. In an interview with CRN, SailPoint founder and CEO Mark McClain said that the Austin, Texas-based vendor is heavily focused right now on addressing many of the “really hard problems in the world of agentic.”
For instance, AI agents will need to be allotted specific levels of controlled access to certain systems or data, McClain said, something that SailPoint has long specialized in for human workers.
“We have to understand an incredibly wide array of [factors] in these large-scale enterprises—and we have to go deep in all of those things to do what we do, to control what you can actually do inside that application,” he said.
San Francisco-based Okta, meanwhile, is putting significant focus into helping developers to build agents that will have strong authentication while using APIs in a secure way, according to Okta co-founder and CEO Todd McKinnon.
There’s no question that it would pose a massive security risk if an agent were compromised and exploited, but there are a number of basic steps that can be taken before even getting to that issue, McKinnon told CRN.
“The basics are you have a bunch of API access tokens strewn all over your company, whether it’s in emails or in Slack or in source code control. You’ve got to get those under control and clean those up,” he said. “Because once you start having these agents, the number of those things is going to be far greater, so the risk is going to be higher.”
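Getting token sprawl "under control," as McKinnon puts it, starts with discovery: scanning code, chat exports, and documents for strings that look like credentials. A minimal sketch follows; the three regexes are illustrative examples of common key formats, not a complete secret-scanning ruleset.

```python
import re

# Illustrative credential patterns (not exhaustive).
TOKEN_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # generic "sk-" style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID format
    re.compile(r"xox[baprs]-[A-Za-z0-9-]{10,}"),  # Slack token prefixes
]

def find_tokens(text):
    """Return all substrings matching a known credential pattern."""
    hits = []
    for pattern in TOKEN_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

sample = "debug: key=sk-abc123def456ghi789jkl012 in config"
print(find_tokens(sample))
```

In practice this would run over source repositories and message archives, with each hit triaged for rotation before agents multiply the number of credentials in circulation.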
For Denver-based Ping Identity, capabilities aimed at enabling adoption of AI agents include helping companies to build automation into their identity services, taking many manual steps out of the process, according to founder and CEO Andre Durand.
This is crucial because the huge scale that agents will be operating on in the future will make it impossible for humans to manage the associated identity and authentication risks without significant automation, Durand said.
“[Ultimately], all of these agents will have incredible access to data on our behalf and will need to be authenticated, authorized and governed,” he said.
The identity and access challenges around AI agents have also received significant attention from Microsoft, whose Entra ID technology is ubiquitous in the business world. In May, Redmond, Wash.-based Microsoft said it is seeking to proactively eliminate top security issues related to the growing adoption of AI agents with the unveiling of its Entra Agent ID offering.

Entra Agent ID enables organizations to gain improved visibility into agents while also allowing for application of identity and access policies through Microsoft’s Conditional Access capabilities, according to Alex Simons, corporate vice president for product management and identity security at Microsoft. The capabilities simplify management and security for AI agents and are crucial because “the scale of the sprawl is going to be so big [and happen] so fast” within many organizations, Simons said. The goal of Entra Agent ID, initially available in public preview, will ultimately be to allow customers to “confidently start adopting agentic AI,” he said.

On the other end of the spectrum, Microsoft has also been focusing on uncovering security risks and vulnerabilities that are specific to agents, including through red-team assessments, according to Ram Shankar Siva Kumar, head of Microsoft’s AI Red Team. The unique challenges posed by AI agents include their ability to remember information and make decisions, as well as their interactions with other agents, he noted.

Key threats include the potential for attackers to “poison” agents, extract sensitive data from an agent’s memory or launch a prompt-injection attack, in which the threat actor seeks to manipulate an agent by inserting malicious instructions, Kumar said. The stakes are high when it comes to rooting out security risks posed by agents, in part because organizations will need to know they can trust agents to maximize their value, he said.
“Proactively red-teaming [the technology] before it reaches the hands of the customer is so vital for us,” Kumar said.
The arrival of agentic technology also creates scenarios that will make transparency and security even more critical, experts said. For instance, at Accenture, “our perspective is that you’re going to quickly see hundreds and thousands of agents being created—not only because the technology is very powerful, but by nature, for an agent to complete a task, sometimes it will have to spawn a new agent,” said Daniel Kendzior, global data and AI security practice leader at Accenture.
“For your ability to do that, you have to now create, effectively, a new layer in your stack, which we think of as an agentic platform,” Kendzior said. Notably, such a platform should at the same time provide a mechanism for management, security and control of the agents, he said.
A key piece of the puzzle is undoubtedly also the emergence of standards focused on driving agent interaction, experts said. Anthropic’s Model Context Protocol has become a popular framework for communications between AI models and other systems since its introduction in November 2024, but until recently it lacked security specifications.

Another protocol for agentic communications, Agent2Agent, originally introduced by Google Cloud, was built with a secure-by-default approach from the start, according to the tech giant. The Agent2Agent protocol is “designed to support enterprise-grade authentication and authorization,” Google Cloud said in its post announcing the protocol in April.

To use the protocol, authentication and authorization between agents is required, noted McDougald of Accenture, which was among the partners that worked on the Agent2Agent project. From there, “you can then create a secure tunnel between the agents so the communication between them is secure as well,” he said.

The protocol “exposes an agent on the internet and says, ‘This is what the functionality of this agent can [provide].’ So agents can discover agents,” he added. “It looks like it’s the beginning of establishing an agent market.”

It’s now abundantly clear that the use cases with AI agents will be far more complex than with the typical LLM-powered applications available so far, according to Evotek’s Shufeldt.
“And because they’re more complex,” he said, “your risks are going to be greater.”
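The discovery-plus-mandatory-authentication idea McDougald describes, where an agent advertises its functionality so other agents can find it, can be sketched loosely as a published "card" that declares required auth schemes. The field names below are simplified illustrations, not the exact Agent2Agent schema.

```python
# An agent publishes a machine-readable card describing what it does
# and what authentication it requires (simplified, illustrative fields).
agent_card = {
    "name": "invoice-reconciler",
    "description": "Matches incoming invoices against purchase orders",
    "skills": ["reconcile_invoice", "flag_mismatch"],
    "auth": {"schemes": ["oauth2"], "required": True},
}

def can_connect(card, client_supports):
    """A client agent may connect only if it supports a required scheme."""
    if not card["auth"]["required"]:
        return True
    return any(s in client_supports for s in card["auth"]["schemes"])

print(can_connect(agent_card, ["oauth2"]))   # shared auth scheme
print(can_connect(agent_card, ["api_key"]))  # no supported scheme
```

Once both sides authenticate, the protocol's secure channel would carry the actual agent-to-agent traffic; the card only governs discovery and the handshake.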

Originally published at CRN on August 6, 2025

Frequently Asked Questions (FAQ)

AI Agent Security and Autonomy

Q: What are the primary security risks associated with granting AI agents more autonomy?
A: The main security risks revolve around identity and access management. If AI agents are not properly configured with granular access privileges, they could potentially access unauthorized systems or sensitive data, leading to data breaches.

Q: How does the autonomy of AI agents amplify security concerns compared to human workers?
A: AI agents can operate at a much larger scale, potentially interacting with vast numbers of systems. This increased scale means that if security is compromised, the impact and the difficulty of detecting breaches are significantly magnified.

Q: What does "agentic AI" refer to in the context of this article?
A: Agentic AI refers to AI systems that can not only reason and make decisions but also take autonomous action based on those decisions, often chaining multiple AI models together to achieve complex goals.

Identity and Access Management for AI Agents

Q: Why is identity and access management crucial for AI agents?
A: AI agents need to access various systems and data to perform their tasks. Properly managing their identities and access privileges ensures they only access what they are authorized to, preventing misuse and security breaches.

Q: How do the identity management needs of AI agents differ from those of human workers?
A: Unlike human identities, which are generally static, AI agents may require dynamic access based on time, task, and context, necessitating different management and governance paradigms for scalability.

Q: What role do solution providers play in securing AI agents?
A: Solution providers can help organizations develop holistic strategies for agentic AI, focusing on advisory services to assess operational choices and implementing robust identity and access controls.

Challenges and Opportunities in Agentic AI

Q: What are the main challenges organizations face when adopting AI agents?
A: Key challenges include the complexity of managing identities and access at scale, ensuring real-time oversight of agent actions, and addressing potential security vulnerabilities unique to agentic AI.

Q: What opportunities exist for solution providers in the agentic AI space?
A: There is significant opportunity for solution providers to assist organizations in navigating the security and operational complexities of AI agents, particularly in identity governance, access management, and overall strategic advisory.

Q: How can organizations ensure the security of their AI agents?
A: Organizations need to implement security measures that are "on steroids" to match the amplified risks of agentic AI. This includes robust identity management, continuous monitoring, and proactive security assessments.

Future of AI Agents

Q: What is the potential scale of AI agents in the future?
A: Experts envision potentially billions of agents operating across the internet, forming a marketplace where they can discover and interact with each other.

Q: What are some emerging standards for AI agent communication?
A: Protocols like Anthropic's Model Context Protocol and Google Cloud's Agent2Agent are emerging to facilitate secure communication and interaction between AI agents.

Crypto Market AI's Take

The increasing autonomy of AI agents presents a dual-edged sword: unprecedented potential for productivity gains alongside significant security challenges, particularly concerning identity and access management. As AI agents become more sophisticated and ubiquitous, the need for robust security frameworks that can handle the dynamic and large-scale nature of their operations becomes paramount. This aligns with our mission at Crypto Market AI to provide secure and intelligent solutions for the digital asset space. Our platform leverages advanced AI and machine learning to offer sophisticated market analysis, secure trading capabilities, and automated strategies. Understanding and mitigating the risks associated with agentic AI is crucial for ensuring the safe and effective integration of these technologies into business operations, a principle we uphold in our own development and service offerings. Explore our insights on AI in Finance to learn more about how AI is transforming the financial landscape.
