Spread of AI agents sparks fears of a cybersecurity crisis

New research warns that widespread AI agent deployment without strong API security could trigger a cybersecurity crisis.

August 13, 2025
5 min read
By iandavidbarker

A new report reveals a growing trust gap between businesses deploying agentic AI for external communications and consumers wary of sharing personal information due to security concerns. The research, carried out by Censuswide for Salt Security, warns that without proper API discovery, governance, and security, the very technology meant to drive smarter customer engagement could open the door to cybersecurity issues, including attacks and data leakage.

Over half (53%) of organizations using agentic AI say they are already deploying it, or plan to deploy it, in customer-facing roles. 48% currently use between six and 20 types of AI agents, and 19% deploy between 21 and 50. Additionally, 37% of organizations report that between one and 100 AI agents are currently active within their systems, while almost a fifth (18%) host between 501 and 1,000 agents.

However, despite this widespread use, only 32% say they conduct daily API risk assessments, and just 37% have a dedicated API security solution. The same percentage have a dedicated data privacy team overseeing AI initiatives.

On the consumer side, 64% have interacted with AI chatbots more frequently in the past year, and 80% of those have shared personal information during these interactions. Indeed, 44% say they've felt pressured to share information just to complete a task. Only 22% of consumers are comfortable sharing data with AI agents, compared with 37% who trust interactions over the phone and 54% who trust them in person.
“Agentic AI is changing the way businesses operate, but consumers are clearly signaling a lack of confidence,” says Michael Callahan, CMO at Salt Security. “What many organizations overlook is that the safety and success of AI depends on the APIs that power it, and they must be effectively discovered, governed and secured. Otherwise, the trust gap will widen, and the risks will escalate.”
You can get the full report, which includes recommended security actions, from the Salt Security site.
Image Credit: Twoapril Studio/Dreamstime.com
Source: Originally published at BetaNews on August 12, 2025.

Frequently Asked Questions (FAQ)

Consumer Trust and Data Privacy

Q: Why are consumers hesitant to share personal information with AI agents?
A: Consumers express concerns about the security of their personal information when interacting with AI agents, leading to a trust gap. They feel pressured to share data just to complete tasks, and they trust in-person interactions more than those with AI.

Q: How does consumer trust in AI agents compare to other communication methods?
A: Consumers are less comfortable sharing data with AI agents (22%) than with phone interactions (37%) or in-person interactions (54%).

Business Adoption of AI Agents

Q: What percentage of organizations are deploying or planning to deploy AI agents for customer-facing roles?
A: Over half (53%) of organizations are already deploying or plan to deploy AI agents for customer-facing roles.

Q: How many AI agents do organizations typically deploy?
A: Many organizations deploy a significant number of AI agents: 48% use between six and 20 types, and 19% use between 21 and 50. Furthermore, 37% have between one and 100 active AI agents, and 18% host between 501 and 1,000.

API Security and Governance

Q: What are the primary cybersecurity risks associated with AI agents?
A: The main risks are attacks and data leakage if the APIs powering AI agents are not properly discovered, governed, and secured.

Q: What percentage of organizations conduct regular API risk assessments?
A: Despite the widespread use of AI agents, only 32% of organizations conduct daily API risk assessments.

Q: Do organizations have dedicated solutions for API security?
A: Just 37% of organizations have a dedicated API security solution in place.

Q: Is there a dedicated team overseeing AI initiatives for data privacy?
A: The same percentage of organizations (37%) have a dedicated data privacy team overseeing AI initiatives.
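
By way of illustration only (this is not drawn from the Salt Security report), the sketch below shows one small piece of what an automated daily API risk assessment might look like in practice: given a hypothetical inventory of known endpoints (the api_inventory.json file and the endpoints in it are assumptions for the example), it flags any endpoint that responds successfully without credentials. Real discovery and governance tooling goes far beyond this, but the sketch conveys the basic idea of routinely checking the APIs that agents rely on.

```python
# Hypothetical sketch: flag inventoried API endpoints that answer without credentials.
# The inventory file and endpoint URLs are illustrative assumptions, not from the report.
import json
import urllib.error
import urllib.request


def check_unauthenticated(endpoints):
    """Return endpoints that respond with a 2xx status when no credentials are attached."""
    exposed = []
    for url in endpoints:
        try:
            req = urllib.request.Request(url, method="GET")  # deliberately no auth header
            with urllib.request.urlopen(req, timeout=5) as resp:
                if 200 <= resp.status < 300:
                    exposed.append(url)
        except urllib.error.URLError:
            # Unreachable, or rejected without credentials -- not flagged by this simple check.
            pass
    return exposed


if __name__ == "__main__":
    # Hypothetical inventory produced by an API discovery step,
    # e.g. ["https://example.com/api/v1/users", "https://example.com/api/v1/orders"]
    with open("api_inventory.json") as f:
        inventory = json.load(f)
    for url in check_unauthenticated(inventory):
        print(f"WARNING: {url} is reachable without authentication")
```

In a real deployment this kind of check would typically run on a schedule, feed its findings into a governance workflow, and be paired with deeper controls such as authentication, rate limiting, and monitoring of the APIs behind each agent.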

Crypto Market AI's Take

The increasing reliance on agentic AI for customer interactions, as highlighted by this report, presents a critical challenge for businesses and a significant concern for consumer trust. The lack of robust API discovery, governance, and security measures identified in the research reflects broader cybersecurity challenges across the rapidly evolving AI sector. At Crypto Market AI, we understand the immense potential of AI to enhance operational efficiency and customer engagement, but we also recognize that security and privacy must be at the forefront of any AI deployment. The report's findings underscore the importance of secure and transparent platforms like our own, which prioritize data integrity and advanced security protocols. Explore how our platform leverages AI for market analysis and trading, ensuring that innovation and security go hand in hand.
