
Spread of AI agents sparks fears of a cybersecurity crisis

New research warns that widespread AI agents increase cybersecurity risks without proper API governance and security measures.

August 13, 2025
5 min read
iandavidbarker

A recent report reveals a growing trust gap between businesses deploying agentic AI for external communications and consumers who remain wary of sharing personal information because of security concerns. The research, conducted by Censuswide for Salt Security, warns that without effective API discovery, governance, and security, the very technology designed to enhance customer engagement could expose organizations to cybersecurity threats such as attacks and data leakage.

More than half (53%) of organizations using agentic AI report that they are already deploying it, or plan to deploy it, in customer-facing roles. Among these, 48% currently use between six and 20 types of AI agents, while 19% deploy between 21 and 50. Additionally, 37% of organizations have between one and 100 AI agents active within their systems, and nearly a fifth (18%) host between 501 and 1,000.

Despite this widespread adoption, only 32% conduct daily API risk assessments, and just 37% have a dedicated API security solution. The same share (37%) has a dedicated data privacy team overseeing AI initiatives.

From the consumer perspective, 64% have interacted more frequently with AI chatbots over the past year, and 80% of those have shared personal information during these interactions. However, 44% report feeling pressured to share information just to complete a task. Trust levels remain low: only 22% of consumers feel comfortable sharing data with AI agents, compared with 37% who trust phone interactions and 54% who trust in-person communications.

Michael Callahan, CMO at Salt Security, states, "Agentic AI is changing the way businesses operate, but consumers are clearly signaling a lack of confidence. What many organizations overlook is that the safety and success of AI depends on APIs that power it and they must be effectively discovered, governed and secured. Otherwise, the trust gap will widen, and the risks will escalate."

For organizations interested in mitigating these risks, the full report includes recommended security actions and is available on the Salt Security website.
Image Credit: Twoapril Studio/Dreamstime.com
Source: Originally published at BetaNews on August 12, 2025.

Frequently Asked Questions (FAQ)

Consumer Trust and AI Agents

Q: Why are consumers hesitant to share personal information with AI agents?
A: Consumers remain wary of sharing personal information with AI agents due to security concerns, leading to a trust gap with businesses deploying these technologies.

Q: How do consumers perceive interactions with AI agents compared to human interactions?
A: Consumers generally trust in-person communications (54%) and phone interactions (37%) more than AI agents (22%) when it comes to sharing data.

Q: Do consumers feel pressured to share information when interacting with AI?
A: Yes, 44% of consumers report feeling pressured to share information simply to complete a task when interacting with AI chatbots.

Business Adoption and Security Practices

Q: What percentage of organizations are deploying or planning to deploy AI agents for customer-facing roles?
A: Over half (53%) of organizations are already deploying or plan to deploy AI agents for customer-facing roles.

Q: How many AI agents are organizations typically deploying?
A: Many organizations deploy between six and 20 types of AI agents (48%), with a significant share deploying 21 to 50 types (19%). Nearly a fifth (18%) host between 501 and 1,000 active agents.

Q: What is the current state of API security and risk assessment in organizations using AI agents?
A: A significant gap exists: only 32% conduct daily API risk assessments, and just 37% have a dedicated API security solution. Similarly, only 37% have a dedicated data privacy team overseeing AI initiatives.

Risks Associated with AI Agent Deployment

Q: What are the primary cybersecurity threats highlighted in the report regarding AI agent deployment?
A: The report points to attacks and data leakage as significant threats, particularly when API discovery, governance, and security are not effectively managed.

Q: What is the crucial factor for the safe and successful operation of AI agents?
A: Michael Callahan of Salt Security emphasizes that the safety and success of AI depend on properly discovered, governed, and secured APIs.
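
To make "discovered, governed, and secured" APIs slightly more concrete, the sketch below shows one possible shape of a routine API risk check run against a hand-maintained inventory of agent-facing endpoints. The Endpoint fields, the checks, and the 30-day review threshold are illustrative assumptions only; they are not Salt Security's methodology or the report's recommended actions.

# Illustrative sketch only: a toy daily "API risk assessment" over a
# hand-maintained inventory of agent-facing endpoints.
from dataclasses import dataclass

@dataclass
class Endpoint:
    path: str                    # route exposed to AI agents
    auth_required: bool          # is authentication enforced?
    handles_pii: bool            # does it accept or return personal data?
    days_since_review: int       # days since the last governance review

def assess(inventory: list[Endpoint]) -> list[str]:
    """Return findings for endpoints that need attention."""
    findings = []
    for ep in inventory:
        if not ep.auth_required:
            findings.append(f"{ep.path}: no authentication enforced")
        if ep.handles_pii and ep.days_since_review > 30:
            findings.append(f"{ep.path}: handles personal data but not reviewed in 30+ days")
    return findings

if __name__ == "__main__":
    inventory = [
        Endpoint("/agents/chat", auth_required=True, handles_pii=True, days_since_review=45),
        Endpoint("/agents/status", auth_required=False, handles_pii=False, days_since_review=10),
    ]
    for finding in assess(inventory):
        print("FINDING:", finding)

In practice the inventory would come from automated API discovery rather than a hand-written list, and the checks would cover far more than authentication and review age; the point is only that governance can be expressed as routine, scriptable checks rather than ad hoc reviews.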

Crypto Market AI's Take

The increasing reliance on AI agents for customer interaction, as highlighted in this report, directly affects the broader landscape of digital trust and security. As more businesses integrate AI into their operations, particularly those handling sensitive customer data, the cybersecurity implications become paramount. For organizations in the cryptocurrency space, where security and trust are foundational, a similar focus on robust API management and data privacy is essential. Understanding the vulnerabilities associated with AI agents and ensuring their secure deployment are crucial for maintaining user confidence and preventing data breaches. Our platform at Crypto Market AI focuses on leveraging AI for market intelligence and trading, but we also recognize the critical importance of secure infrastructure and ethical AI deployment in safeguarding user assets and data.
