August 13, 2025
5 min read
Paul Wagenseil
AI Agents Running Wild: How Organizations Are Charging Ahead Without AI Safeguards
The great AI gold rush is on. In a recent survey commissioned by Okta, 91% of organizations said they use AI agents for tasks ranging from software coding to automating repetitive jobs and making market forecasts. Productivity gains are a chief motivator for AI adoption, along with the fear of falling behind competitors. Two-thirds of surveyed organizations said AI is "very critical" or "absolutely essential" for business success. Yet, companies may be rushing toward a cliff. Only 10% reported having "well-developed" strategies for managing non-human identities (NHIs), including AI agents. The majority operate without proper safeguards. Organizations are adopting AI without sufficient security or governance. Unfortunately, AI security remains a half-formed mess, repeating vulnerabilities from decades ago."Unmanaged AI agents introduce a new class of security risk because they act autonomously, often with broad access to sensitive systems," said Arnab Bose, Chief Product Officer of the Okta Platform. "Without proper controls, these agents can unintentionally leak data, trigger actions across connected apps, or be exploited by threat actors through techniques like prompt injection or token hijacking."To prevent AI agents from going out of bounds, organizations must tightly control their authorization, authentication, and access. Integrating AI agents with identity-security systems enables management of who uses AI and what AI agents are allowed to access and do. This familiar approach resonates: 52% of Okta survey respondents said identity and access management (IAM) is "very important" for AI integration, and another 33% said it is "important."
How and Why AIs Are Used in the Workplace
Okta commissioned AlphaSights to survey 260 IT executives (CTOs, CISOs, CIOs, CSOs, and VPs) across seven Western countries plus India and Japan. The survey revealed key AI benefits:- Increased productivity (84%)
- Cost savings (60%)
- Better customer experience (48%)
- Streamlined workflows (47%)
- Faster decision-making (39%) AI agents and large language models (LLMs) are tasked with:
- Automation and process optimization (84%)
- Coding and software development (74%)
- Content generation (68%)
- Natural language processing (66%)
- Predictive analysis and forecasting (55%) Despite benefits, concerns remain:
- Data privacy (68%)
- Security risks (60%)
- Compliance and governance (37%)
- Lack of transparency (35%)
- Ethical and bias issues (34%)
- Job displacement (4%) There is some misunderstanding about AI agents. More respondents felt confident managing AI than managing NHIs, the broader category including AI agents and LLMs. Thirty-six percent said they "currently have a centralized governance model for AI," yet only 10% properly manage NHIs. Concerns about AI governance and oversight were expressed by 58%, and 50% worried about compliance and regulatory requirements. However, 78% saw controlling NHI access and permissions, and 69% governing NHI lifecycles, as pressing security issues.
- AI Agents Are Broken: Can GPT-5 Fix Them?
- AI Agents: Capabilities, Risks, and Their Growing Role
- Turbocharged Cyberattacks Are Coming Under Empowered AI Agents
"Given the data AI agents access now and in the future, it's essential to have the same controls as for human agents," said a UK technology executive.
The Many Security Risks of AIs
AI agents can be unpredictable and powerful, like "small children in super-soldier mecha suits" who can escape sandboxes and cause havoc. Many company-run AI bots are accessible from the open internet, increasing risk. AI agents often collect data from across the internet, making them vulnerable to data poisoning. Once compromised, an AI agent cannot be trusted because false information contaminates its training data. A Salesforce study found LLMs do not inherently understand the importance of keeping sensitive data secret. Some AIs have even assisted attackers in exfiltrating data because they are designed to comply with commands. Many LLMs cannot distinguish data from commands, allowing attackers to embed malicious prompts, similar to SQL injection attacks on databases. Filtering malicious prompts by blocking words is ineffective because language is flexible and prompts can be reworded. This non-deterministic nature makes AIs ill-suited for APIs, which require predictable inputs. To address this, Anthropic introduced the Model Context Protocol (MCP) in late 2024, a server-client model matching AI agents with applications and business tools. MCP is widely adopted but has serious security flaws, including vulnerabilities to prompt injection, typosquatting, malicious updates, permission reuse, and cross-tool contamination. Google's peer-to-peer Agent2Agent Protocol (A2A), unveiled in early 2025, is generally safer but still exploitable by malicious tools. Both MCP and A2A use the OAuth standard to cross-authorize AI agents and tools but do not authenticate tools, allowing attackers to impersonate tools through name swaps or false capabilities."I'm most concerned about AI systems having too much access without proper controls," said an Australian healthcare executive. "Strong oversight and access control are essential to keep AI secure."
How to Extend Identity Controls to Manage AI
The primary requirement for AI security is acknowledging AI agents will make mistakes and are vulnerable to attacks. Without safeguards, it's like everyone driving super-fast cars without seatbelts. These "seatbelts" exist in identity and access management (IAM) controls. Organizations can manage who and what accesses AI agents and what AI agents themselves can access. Researchers have extended the OAuth standard to let identity provisioning systems control access to and from AI agents and their tools. Okta calls this Cross App Access, compatible with any IAM system supporting OAuth."With Cross App Access, organizations can define exactly which agents or applications can connect, what data they can access, and under what conditions," said Arnab Bose. "IT can centrally manage, audit, and instantly revoke these connections if needed."Like humans, AI agents and LLMs can be authorized, authenticated, provisioned, and granted access to specific tools and data while denied others. The principle of least privilege applies, limiting AI permissions to only what is necessary. At Black Hat 2025, a briefing emphasized imposing a zero-trust model on AI, assuming AI agents can be compromised anytime and designing security accordingly.
"Governance and access control are critical given AI's level of access and execution ability," said a U.S. banking executive.
Frequently Asked Questions (FAQ)
Understanding AI Agents and Safeguards
Q: What are AI agents and why are organizations adopting them so rapidly? A: AI agents are software programs that use artificial intelligence to perform tasks autonomously, ranging from coding and automating repetitive jobs to making market forecasts. Organizations are adopting them primarily for productivity gains and to avoid falling behind competitors. Q: What is the main concern regarding the current adoption of AI agents? A: The primary concern is that many organizations are implementing AI agents without adequate security or governance strategies. Only a small percentage have well-developed strategies for managing these non-human identities (NHIs), leaving them vulnerable to risks. Q: What specific security risks do unmanaged AI agents pose? A: Unmanaged AI agents can unintentionally leak data, trigger unintended actions across connected applications, or be exploited by threat actors through methods like prompt injection or token hijacking. They can also be susceptible to data poisoning, making their outputs unreliable. Q: How can organizations ensure the secure use of AI agents? A: Organizations can ensure security by tightly controlling the authorization, authentication, and access of AI agents. Integrating them with identity-security systems (IAM) is crucial for managing who uses them and what they can access. Q: What is the role of Identity and Access Management (IAM) in AI security? A: IAM systems can extend familiar controls to AI agents, enabling management of their access permissions, authentication, and provisioning, similar to how human users are managed. This helps in adhering to the principle of least privilege. Q: What is the significance of a "zero-trust model" for AI agents? A: A zero-trust model assumes that AI agents can be compromised at any time, prompting organizations to design their security architecture with this assumption in mind, implementing robust controls and continuous verification.Crypto Market AI's Take
The rapid adoption of AI agents in organizations, as highlighted in the article, underscores a critical shift in how businesses operate. At AI Crypto Market, we see AI agents not just as tools for efficiency, but as sophisticated entities that require robust management, much like any other critical digital asset or identity within an organization. Our platform is built on the principle that AI should augment human capabilities securely and transparently. We believe that effective identity and access management, coupled with a strong understanding of potential risks like prompt injection, are paramount. This aligns with the growing industry focus on securing these powerful tools, a perspective we deeply share. For a deeper dive into how AI is transforming the financial sector and the necessary security considerations, explore our insights on AI Agents and their impact on the Crypto Market.More to Read:
Originally published at SC Media on August 13, 2025.