Spread of AI Agents Sparks Fears of a Cybersecurity Crisis
A new report reveals an increasing trust gap between businesses deploying agentic AI for external communications and consumers wary of sharing personal information due to security concerns.
The research, carried out by Censuswide for Salt Security, warns that without proper API discovery, governance, and security, the very technology meant to drive smarter customer engagement could open the door to cybersecurity issues including attacks or data leakage.
Over half (53%) of organizations say they are already deploying agentic AI in customer-facing roles, or plan to. 48% currently use between six and 20 types of AI agents, and 19% deploy between 21 and 50. Additionally, 37% of organizations report that between one and 100 AI agents are currently active within their systems, and almost a fifth (18%) host between 501 and 1000 agents.
However, despite the widespread use of this technology, only 32% say they conduct daily API risk assessments, and just 37% have a dedicated API security solution. The same percentage have a dedicated data privacy team overseeing AI initiatives.
On the consumer side, 64% have interacted with AI chatbots more frequently in the past year, and 80% of those have shared personal information during these interactions. Notably, 44% say they’ve felt pressured to share information just to complete a task.
Only 22% of consumers are comfortable sharing data with AI agents, compared to 37% who trust interactions over the phone and 54% in person.
“Agentic AI is changing the way businesses operate, but consumers are clearly signaling a lack of confidence,” says Michael Callahan, CMO at Salt Security. “What many organizations overlook is that the safety and success of AI depends on the APIs that power it, and they must be effectively discovered, governed, and secured. Otherwise, the trust gap will widen, and the risks will escalate.”
You can get the full report, which includes recommended security actions, from the Salt Security site.
Image Credit: Twoapril Studio/Dreamstime.com
Source: Originally published at BetaNews on August 12, 2025.
Frequently Asked Questions (FAQ)
Consumer Trust and AI Agents
Q: Why are consumers wary of sharing personal information with AI agents?
A: Consumers are wary due to security concerns, creating a trust gap. The report finds that 64% of consumers have interacted with AI chatbots more frequently over the past year, and 80% of those have shared personal information during these interactions. Notably, 44% say they have felt pressured to share data just to complete a task.
Q: How does consumer trust in AI agents compare to other communication methods?
A: Consumer comfort levels are significantly lower with AI agents compared to other methods. Only 22% of consumers are comfortable sharing data with AI agents, while 37% trust phone interactions and 54% trust in-person interactions.
Business Adoption of AI Agents
Q: What percentage of organizations are deploying or planning to deploy AI agents in customer-facing roles?
A: Over half (53%) of organizations are already deploying or plan to deploy agentic AI for customer-facing roles.
Q: How many AI agents are organizations currently deploying?
A: 48% of organizations use between six and 20 types of AI agents, and 19% deploy between 21 and 50 types. Furthermore, 37% report having one to 100 AI agents active in their systems, and 18% host between 501 and 1000 agents.
Cybersecurity Risks and API Security
Q: What are the main cybersecurity risks associated with agentic AI according to the report?
A: The report warns that without proper API discovery, governance, and security, agentic AI can lead to cybersecurity issues such as attacks and data leakage.
Q: What percentage of organizations conduct regular API risk assessments?
A: Only 32% of organizations using agentic AI conduct daily API risk assessments.
Q: Do organizations have dedicated solutions for API security or data privacy oversight for AI initiatives?
A: Just 37% of organizations have a dedicated API security solution, and the same percentage have a dedicated data privacy team overseeing AI initiatives.
Crypto Market AI's Take
The increasing reliance on AI agents for customer interactions, as highlighted in this report, underscores a critical need for robust API security and data governance. At Crypto Market AI, we understand that advancements in AI, particularly agentic AI, must be coupled with strong security infrastructure to foster user trust. Our platform emphasizes secure data handling and advanced analytics, reflecting that same need for security in AI deployments. For businesses looking to leverage AI while mitigating risks, understanding secure API management and data privacy is paramount. Explore our insights on AI agents in the financial sector to learn more about the intersection of AI and financial technology.