Cyata Security Ltd. has secured $8.5 million in seed funding to develop solutions for managing "out-of-control" AI agents within enterprise environments. The funding round was spearheaded by TLV Partners, with participation from notable figures like Ron Serber and Yossi Carmil, former CEOs of Cellebrite DI Ltd.
The core problem Cyata addresses is the increasing use of AI agents in businesses, often described as "digital employees." These agents automate tasks rapidly and cost-effectively, but they operate outside traditional identity and security frameworks, making them difficult to monitor and govern. This lack of oversight creates significant risks, as AI agents can potentially rewrite code, leak sensitive data, expose secrets, or even manipulate financial accounts without leaving clear audit trails.
According to Cyata's co-founder and CEO, Shahar Tal, AI agents differ significantly from human employees or traditional service accounts. They are dynamic, appearing and disappearing instantaneously across workflows and acting autonomously. Furthermore, they are susceptible to "hallucinations," which can lead to erroneous decisions, and they can be targeted by malicious actors for exploitation. Tal likens the advent of AI agents to "the biggest leap in enterprise technology since the cloud," highlighting their self-scaling, tireless nature in coding, analysis, and execution.
To combat these risks, Cyata has developed an "agentic control plane." This system offers comprehensive visibility into AI agents operating within cloud environments, encompassing chatbots, coding bots, and other task-driven agents. A key component of their offering is an automated AI agent discovery tool that scans cloud and SaaS environments, along with identity management systems, to identify AI agents based on their behavioral patterns. Once unauthorized agents are detected, the platform can lock them down and enforce the principle of least privilege to prevent potential damage.
Cyata also provides forensic observability tools for authorized AI agents. These tools create detailed audit trails and capture the intent behind an agent's actions by requiring real-time justifications, enabling granular access controls that limit agents to only the necessary systems and databases. Tal emphasizes that Cyata's focus is on the "actors," meaning the AI agents themselves, rather than the underlying Large Language Models (LLMs). The company aims to equip security teams with identity-grade controls specifically designed for AI agents, allowing enterprises to leverage their power securely.
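The "intent capture" idea can likewise be sketched in a few lines. The function below is an assumption-laden illustration, not Cyata's interface: every access attempt must carry a justification, the attempt is appended to an audit trail whether or not it is granted, and each agent can only reach resources explicitly scoped to it.

```python
import datetime

class JustificationRequired(Exception):
    """Raised when an agent requests access without stating intent."""

audit_log: list[dict] = []

def grant_access(agent: str, resource: str, justification: str,
                 allowed_resources: set[str]) -> bool:
    """Record intent before deciding, enforcing per-agent resource scoping."""
    if not justification.strip():
        raise JustificationRequired(f"{agent} gave no justification for {resource}")
    granted = resource in allowed_resources
    # Append to the trail regardless of outcome, so denials are auditable too.
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "resource": resource,
        "justification": justification,
        "granted": granted,
    })
    return granted

scope = {"billing-db:read"}  # this agent may only read the billing database
print(grant_access("invoice-bot", "billing-db:read",
                   "reconcile July invoices", scope))   # → True
print(grant_access("invoice-bot", "payroll-db:write",
                   "reconcile July invoices", scope))   # → False
```

The design choice worth noting is that denied attempts are logged with the same fidelity as granted ones, which is what makes after-the-fact forensics possible.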
Existing identity and privileged access management tools are deemed insufficient for AI agents due to their dynamic nature, tendency to share credentials, and rapid disappearance before detection. Robert Burns, Chief Security Officer at Thales Cybersecurity SA, acknowledges the complexity AI agents introduce to traditional identity models and commends Cyata's focus on this emerging security challenge. Brian Sack of TLV Partners anticipates substantial demand for Cyata's platform as AI agent adoption is projected to grow significantly in the coming years, positioning Cyata to address this critical new security category before major breaches occur.
Source: Originally published at SiliconANGLE on July 30, 2025.
Frequently Asked Questions (FAQ)
Understanding AI Agents and Their Risks
Q: What are AI agents in an enterprise environment?
A: AI agents, often referred to as "digital employees," are automated systems that perform business processes rapidly and cost-effectively. Left ungoverned, however, they can rewrite application code, leak confidential data, expose secrets, and even move funds, often without human oversight.
Q: Why are AI agents a security concern?
A: AI agents operate outside traditional identity and security frameworks, making them difficult to monitor and govern. Their dynamic nature, autonomous actions, and susceptibility to "hallucinations" or hacking create significant risks for sensitive enterprise systems and data.
Q: How does Cyata Security address the risks associated with AI agents?
A: Cyata offers an "agentic control plane" that provides visibility into AI agents. Their core product automatically discovers unauthorized agents by scanning cloud environments and identity management systems, then locks them down and enforces least privilege access. They also provide forensic observability for authorized agents to ensure accountability.
Q: What makes AI agents different from traditional service accounts or human employees from a security perspective?
A: Unlike human employees or static service accounts, AI agents are dynamic. They can spawn instantly, spread across workflows, act autonomously, and disappear quickly, making them challenging to track with traditional security tools.
Cyata Security's Solution
Q: What is Cyata's "agentic control plane"?
A: The agentic control plane is Cyata's solution designed to provide comprehensive visibility and control over AI agents operating within enterprise cloud environments. It includes tools for discovering, securing, and monitoring these agents.
Q: How does Cyata's discovery tool work?
A: The tool scans cloud and SaaS environments, along with identity management systems, to identify AI agents by analyzing their behavioral patterns.
Q: What measures does Cyata implement to secure authorized AI agents?
A: For authorized agents, Cyata offers forensic observability tools that create detailed audit trails and capture agent intent by requiring real-time justifications for actions. This enables granular access controls to restrict agents to only necessary systems and databases.
Q: Why are traditional identity and access management tools insufficient for AI agents?
A: Traditional tools struggle with the rapid instantiation, credential sharing, and ephemeral nature of AI agents, making them difficult to detect and manage effectively before potential security incidents occur.
Crypto Market AI's Take
The funding secured by Cyata Security highlights a growing and critical concern in the rapidly evolving landscape of artificial intelligence. As AI agents become more sophisticated and integrated into business operations, the need for robust security measures becomes paramount. This trend is mirrored in the cryptocurrency space, where AI is increasingly being used for trading, market analysis, and even in the development of decentralized applications. However, just as enterprises face challenges in controlling AI agents, the decentralized and often pseudonymous nature of crypto presents its own unique set of security and regulatory hurdles. Companies leveraging AI in the crypto market must also grapple with ensuring the integrity of their AI models, preventing manipulation, and adhering to evolving compliance frameworks. Our platform, Crypto Market AI, focuses on providing insights into these converging trends, offering tools and analysis that help navigate both the opportunities and risks presented by AI in the financial sector. We also delve into the security implications of advanced technologies, including the challenges and solutions for managing AI-driven threats, in our coverage of AI agents and their capabilities.