Spread of AI agents sparks fears of a cybersecurity crisis

Widespread adoption of agentic AI raises cybersecurity risks due to insufficient API governance and consumer trust issues.

August 12, 2025
5 min read
iandavidbarker

A new report reveals a growing trust gap between businesses deploying agentic AI for external communications and consumers wary of sharing personal information over security concerns. The research, carried out by Censuswide for Salt Security, warns that without proper API discovery, governance, and security, the very technology designed to drive smarter customer engagement could open the door to cybersecurity issues, including attacks and data leakage.

Over half (53%) of organizations say they are already deploying agentic AI, or plan to, in customer-facing roles. 48% currently use between six and 20 types of AI agents, and 19% deploy between 21 and 50. Additionally, 37% of organizations report that between one and 100 AI agents are currently active within their systems, and almost a fifth (18%) host between 501 and 1,000 agents.

However, despite this widespread use, only 32% conduct daily API risk assessments, and just 37% have a dedicated API security solution. The same percentage have a dedicated data privacy team overseeing AI initiatives.

On the consumer side, 64% have interacted with AI chatbots more frequently in the past year, and 80% of those have shared personal information during these interactions. Indeed, 44% say they’ve felt pressured to share information just to complete a task. Only 22% of consumers are comfortable sharing data with AI agents, compared to 37% who trust interactions over the phone and 54% in person.
“Agentic AI is changing the way businesses operate, but consumers are clearly signaling a lack of confidence,” says Michael Callahan, CMO at Salt Security. “What many organizations overlook is that the safety and success of AI depends on APIs that power it and they must be effectively discovered, governed and secured. Otherwise, the trust gap will widen, and the risks will escalate.”
You can get the full report, which includes recommended security actions, from the Salt Security site.
Image Credit: Twoapril Studio / Dreamstime.com
Source: Originally published at BetaNews on Tue, 12 Aug 2025 12:51:36 GMT.

Frequently Asked Questions (FAQ)

Understanding Agentic AI and Consumer Trust

Q: What are agentic AI agents?
A: Agentic AI agents are artificial intelligence systems designed to perform tasks autonomously, often interacting with external systems and users to achieve specific goals.

Q: Why are consumers wary of sharing data with AI agents?
A: Consumers express concerns about data privacy and security, leading to a trust gap when interacting with AI agents, especially compared to human interactions. Many feel pressured to share information even when uncomfortable.

Q: What percentage of businesses are deploying AI agents for customer-facing roles?
A: Over half (53%) of organizations surveyed are already deploying, or plan to deploy, agentic AI in customer-facing roles.

Q: How many AI agents are typically deployed by organizations?
A: Many organizations deploy a significant number of AI agents, with 48% using between six and 20 types and 19% using between 21 and 50. Some host hundreds or even thousands.

Security and Governance of AI Agents

Q: What are the main cybersecurity risks associated with AI agents?
A: The primary risks are attacks and data leakage stemming from a lack of proper API discovery, governance, and security measures.

Q: What percentage of organizations conduct daily API risk assessments?
A: Only 32% of organizations using agentic AI conduct daily API risk assessments.

Q: Do organizations have dedicated API security solutions for AI initiatives?
A: Just 37% of organizations have a dedicated API security solution, and the same percentage have a dedicated data privacy team overseeing AI initiatives.

Q: What is the key to ensuring the safety and success of AI agents?
A: According to Michael Callahan of Salt Security, the safety and success of AI depend on the APIs that power it, which must be effectively discovered, governed, and secured.
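The API governance the report calls for can be made concrete with a small example. The sketch below (a hypothetical, illustrative spec loosely following the OpenAPI convention; the endpoints and field names are assumptions, not from the report) flags operations that declare no authentication requirement, one basic check an API inventory or governance pass might perform:

```python
# Hedged sketch: flag operations in an OpenAPI-style spec dictionary that
# declare no security requirement. The spec below is purely illustrative.

spec = {
    "paths": {
        "/agents/query": {"post": {"security": [{"apiKey": []}]}},
        "/agents/feedback": {"post": {}},  # no security requirement declared
        "/health": {"get": {}},
    }
}

def unsecured_operations(spec):
    """Return sorted (path, method) pairs with no declared security requirement."""
    findings = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if not op.get("security"):
                findings.append((path, method))
    return sorted(findings)

print(unsecured_operations(spec))
```

A real governance program would go much further (runtime discovery of undocumented "shadow" APIs, rate limiting, data-classification checks), but even a static scan like this surfaces endpoints an AI agent could reach without authenticating.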

Crypto Market AI's Take

The increasing adoption of agentic AI in customer-facing roles presents a significant opportunity for businesses to enhance engagement and efficiency. However, as this report highlights, the current lack of robust API security and data governance practices creates substantial cybersecurity risks. At Crypto Market AI, we understand the critical importance of secure and reliable technology infrastructure. Our platform emphasizes advanced security protocols and compliance measures to protect user data and ensure operational integrity, the same priorities Salt Security's findings urge businesses deploying AI to adopt. Ensuring that the APIs powering these AI agents are secure is paramount to building and maintaining consumer trust in the digital age.
