August 6, 2025
5 min read
Kyle Alspach
AI agents promise huge productivity gains but require robust identity and access oversight to prevent major security risks.
Mistaken Identity? AI Agent Oversight Key To Success
By Kyle Alspach Granting AI increased autonomy poses critical security risks, opening the door for solution providers to tackle access privileges and more. If the tech industry gets things right with AI agents, it could soon be possible to grant significant autonomy to entire teams of virtual assistants to go out and achieve âvery complex goalsâ all on their own, according to Accentureâs Damon McDougald. âThatâs where everyone has the vision and desire for agents to go,â said McDougald, global cyber protection lead at Dublin, Ireland-based Accenture, No. 1 on CRNâs 2025 Solution Provider 500. However, if industry efforts come up shortâespecially when it comes to securityâthe AI agent revolution and its promise of unprecedented productivity gains could hit major roadblocks, cybersecurity experts and industry executives told CRN. This is especially the case when it comes to security for identities and management of access privileges, already a notoriously difficult area for organizations when it comes to managing human workers. The issue could be exponentially riskier when it comes to autonomous AI workers. In other words, to truly turn the industry vision for AI agents into reality, identity and access considerations should be paramount, experts said.âYou need to be thinking from an identity standpoint of how do [agents] get access to things at different times for different time lengths?â McDougald said. âThatâs different than what we usually do today for humans.
â[However], if we get agents right, weâll see a scale [of productivity improvements] that we havenât seen before,â he said. âAt some point in time, there will potentially be billions of agents running around the internet. There will be a marketplace of agents, and weâre only limited by our compute and imagination on how agents will work. And so the identity tools need to fit in that reality today.âThe potential risks from insecure or misconfigured AI agents are not hard to envision, particularly from an identity and access perspective. The entire purpose of agents is to connect to many different systems and data sources to autonomously accomplish tasks. But if such access is not actually supposed to be authorized, a data breach is the probable result.
â[With AI agents], the need for the access to data is going to typically be greater,â said Matt Shufeldt, chief solutions officer at San Diego-based systems integrator Evotek, No. 92 on CRNâs 2025 Solution Provider 500. âAnd because of that, you need to solve for those data management issues more quickly.âExacerbating the issue is the fact that because agents are charged with executing tasks without constant human oversight, breaches could go entirely unnoticed for a period of time. The risks may sound familiarâakin to those that the industry grappled with from the arrival of GenAIâbut are magnified several times over when it comes to agents, experts said.
âWhatever we thought were the problems in the GenAI or LLM world, agentic is multiple times that,â said Ankur Shah, cofounder and CEO at Straiker, a Sunnyvale, Calif.-based startup focused on AI security.
âAgents are basically LLMs that can reason, make decisions and take action on [those decisions]. Theyâre chained together, so they are nothing but GenAI on steroids,â Shah said. âAnd so you also have to think about putting your security on steroids in the agentic world.âSolution providers that can help organizations deal with the challenges around agentic AI will find no shortage of opportunity going forward, solution provider executives told CRN. The bottom line is that the idea of granting AI greater autonomy than ever before raises new security questions that most organizations are not equipped to answer on their own, executives said.
âThatâs where I see us just having a ton of opportunity,â said Kevin Lynch, CEO of Denver-based Optiv, No. 28 on CRNâs 2025 Solution Provider 500.
âI think it starts in our advisory motion by looking at helping the client to think holistically [about agentic AI],â Lynch said. âWeâre helping that client to think through the implications of their operational choices.âExperts told CRN that oversight of identity and access issues will be central to any strategy around enabling AI agents. Whereas traditional identities have been tied to humans, the larger scale on which agents will operate means major changes ahead for management, governance and authorization of identities, according to Alex Bovee, co-founder and CEO of ConductorOne, an identity governance startup focused on agentic AI with headquarters in Portland, Ore., and San Francisco.
âItâs just completely different patterns and paradigms for how you would manage those identities at scale,â Bovee said.Indeed, the maxim that security is only possible when you first have visibility seems truer than ever when it comes to agentic technologies, according to vendor and solution provider executives. Real-time oversight around the actions taken by AI agents will be pivotal, which will likely mean adding an extra layer of enforcement on top of AI agents that will ensure they are not going astray from what is expected, Bovee said.
â[With AI agents], now you have not just a user visibility and transparency challenge,â said Ben Prescott, head of AI solutions at Irvine, Calif.-based Trace3, No. 34 on CRNâs 2025 Solution Provider 500. âNow you [have to know] what is the agentic solution itself actually planning and executing? And how do we understand what the right output is that is actually generating within that agentic workflow?âOne identity security startup working with Trace3 is Descope, which is seeking to become a go-to agentic identity provider for the coming era of AI agents, according to Descope co-founder Rishi Bhargava. Los Altos, Calif.-based Descope is working to give security teams the ability to manage which agents are authorized to connect to which tools inside their organization, as well as the level of permissions the agents receive from the tools, Bhargava said.
âWe are able to do pretty much a full life-cycle management on the agent: creating an agent, on-boarding an agent, revoking an agent, removing permissions of an agent,â he said.The idea, Bhargava said, is that security teams will put in controls and policies for agentic while configuring the dozens of tools involved with creating AI agents. Then, the developers can use Descopeâs SDKs to build the agents, he said. If this type of process is not followed, the security risks can be substantial, according to Bhargava.
âThe alternative is the security team has no idea about what agents got deployed, what tools theyâre connected to, what level of permission these agents have, and they have no control and no visibility,â he said. âWe are starting to see customers already engage [around this issue]. Theyâre saying, âWe are blocking our developers from deploying agents, but we want to enable them.â This is the way to securely enable them.âThe larger, well-established identity security vendorsâincluding SailPoint, Okta and Ping Identityâalso have quickly embraced the opportunity to help solve some of the foremost agentic security challenges, according to the CEOs of the companies. A recent survey conducted by SailPoint found 80 percent of respondents reporting that AI agents had taken actions that were not intended, such as accessing an unauthorized system or sharing sensitive data. In an interview with CRN, SailPoint founder and CEO Mark McClain said that the Austin, Texas-based vendor is heavily focused right now on addressing many of the âreally hard problems in the world of agentic.â
For instance, AI agents will need to be allotted specific levels of controlled access to certain systems or data, McClain said, something that SailPoint has long specialized in for human workers.
âWe have to understand an incredibly wide array of [factors] in these large-scale enterprisesâand we have to go deep in all of those things to do what we do, to control what you can actually do inside that application,â he said.San Francisco-based Okta, meanwhile, is putting significant focus into helping developers to build agents that will have strong authentication while using APIs in a secure way, according to Okta co-founder and CEO Todd McKinnon.
Thereâs no question that it would pose a massive security risk if an agent were compromised and exploited, but there are a number of basic steps that can be taken before even getting to that issue, McKinnon told CRN.
âThe basics are you have a bunch of API access tokens strewn all over your company, whether itâs in emails or in Slack or in source code control. Youâve got to get those under control and clean those up,â he said. âBecause once you start having these agents, the number of those things is going to be far greater, so the risk is going to be higher.âFor Denver-based Ping Identity, capabilities aimed at enabling adoption of AI agents include helping companies to build automation into their identity services, taking many manual steps out of the process, according to founder and CEO Andre Durand.
This is crucial because the huge scale that agents will be operating on in the future will make it impossible for humans to manage the associated identity and authentication risks without significant automation, Durand said.
â[Ultimately], all of these agents will have incredible access to data on our behalf and will need to be authenticated, authorized and governed,â he said.The identity and access challenges around AI agents have also received significant attention from Microsoft, whose Entra ID technology is ubiquitous in the business world. In May, Redmond, Wash.-based Microsoft said it is seeking to proactively eliminate top security issues related to the growing adoption of AI agents with the unveiling of its Entra Agent ID offering. Entra Agent ID enables organizations to gain improved visibility into agents while also allowing for application of identity and access policies through Microsoftâs Conditional Access capabilities, according to Alex Simons, corporate vice president for product management and identity security at Microsoft. The capabilities simplify management and security for AI agents and are crucial because âthe scale of the sprawl is going to be so big [and happen] so fastâ within many organizations, Simons said. The goal of Entra Agent ID, initially available in public preview, will ultimately be to allow customers to âconfidently start adopting agentic AI,â he said. On the other end of the spectrum, Microsoft has also been focusing on uncovering security risks and vulnerabilities that are specific to agents, including through red-team assessments, according to Ram Shankar Siva Kumar, head of Microsoftâs AI Red Team. The unique challenges posed by AI agents include their ability to remember information and make decisions, as well as their interactions with other agents, he noted. Key threats include the potential for attackers to âpoisonâ agents, extract sensitive data from an agentâs memory or launch a prompt-injection attack, where the threat actor seeks to manipulate an agent by inserting malicious instructions, Kumar said. The stakes are high when it comes to rooting out security risks posed by agents, in part because organizations will need to know they can trust agents to maximize their value, he said.
âProactively red-teaming [the technology] before it reaches the hands of the customer is so vital for us,â Kumar said.The arrival of agentic technology also creates scenarios that will make transparency and security even more critical, experts said. For instance, at Accenture, âour perspective is that youâre going to quickly see hundreds and thousands of agents being createdânot only because the technology is very powerful, but by nature, for an agent to complete a task, sometimes it will have to spawn a new agent,â said Daniel Kendzior, global data and AI security practice leader at Accenture.
âFor your ability to do that, you have to now create, effectively, a new layer in your stack, which we think of as an agentic platform,â Kendzior said. Notably, such a platform should at the same time provide a mechanism for management, security and control of the agents, he said.A key piece of the puzzle is undoubtedly also the emergence of standards focused on driving agent interaction, experts said. Anthropicâs Model Context Protocol has become a popular framework for communications between AI models and other systems since its introduction in November 2024, but it has also lacked security specifications until recently. Another protocol for agentic communications, Agent2Agent, originally introduced by Google Cloud, was built with a secure-by-default approach from the start, according to the tech giant. The Agent2Agent protocol is âdesigned to support enterprise-grade authentication and authorization,â Google Cloud said in its post announcing the protocol in April. To use the protocol, authentication and authorization between agents is required, noted McDougald of Accenture, which was among the partners that worked on the Agent2Agent project. From there, âyou can then create a secure tunnel between the agents so the communication between them is secure as well,â he said. The protocol âexposes an agent on the internet and says, âThis is what the functionality of this agent can [provide].â So agents can discover agents,â he added. âIt looks like itâs the beginning of establishing an agent market.â Itâs now abundantly clear that the use cases with AI agents will be far more complex than with the typical LLM-powered applications available so far, according to Evotekâs Shufeldt.
âAnd because theyâre more complex,â he said, âyour risks are going to be greater.â
Originally published at CRN on August 6, 2025