Spread of AI Agents Sparks Fears of a Cybersecurity Crisis
A new report reveals an increasing trust gap between businesses deploying agentic AI for external communications and consumers wary of sharing personal information due to security concerns.
The research, carried out by Censuswide for Salt Security, warns that without proper API discovery, governance, and security, the very technology meant to drive smarter customer engagement could open the door to cybersecurity issues, including attacks or data leakage.
Over half (53%) of organizations using agentic AI say they are already deploying it, or plan to, for customer-facing roles. Forty-eight percent currently use between six and 20 types of AI agents, and 19% deploy between 21 and 50. Thirty-seven percent of organizations report that one to 100 AI agents are currently active within their systems, and almost a fifth (18%) host between 501 and 1000 agents.
However, despite widespread use, only 32% say they conduct daily API risk assessments, and just 37% have a dedicated API security solution. The same proportion (37%) have a dedicated data privacy team overseeing AI initiatives.
On the consumer side, 64% have interacted with AI chatbots more frequently in the past year, and 80% of those have shared personal information during these interactions. Indeed, 44% say they’ve felt pressured to share information just to complete a task.
Only 22% of consumers are comfortable sharing data with AI agents, compared to 37% who trust interactions over the phone and 54% in person.
“Agentic AI is changing the way businesses operate, but consumers are clearly signaling a lack of confidence,” says Michael Callahan, CMO at Salt Security. “What many organizations overlook is that the safety and success of AI depends on APIs that power it and they must be effectively discovered, governed and secured. Otherwise, the trust gap will widen, and the risks will escalate.”
The full report, which includes recommended security actions, is available from the Salt Security website.
Image Credit: Twoapril Studio/Dreamstime.com
Source: BetaNews
Frequently Asked Questions (FAQ)
Consumer Trust and AI Agents
Q: Why are consumers hesitant to share personal information with AI agents?
A: Consumers are hesitant due to concerns about data security and privacy. The report indicates that only 22% of consumers are comfortable sharing data with AI agents, significantly less than their trust in phone or in-person interactions.
Q: Have consumers noticed an increase in AI chatbot interactions?
A: Yes, 64% of consumers have interacted with AI chatbots more frequently in the past year, with 80% of those sharing personal information during these interactions.
Q: Do consumers feel pressured to share data with AI agents?
A: Yes, 44% of consumers report feeling pressured to share information simply to complete a task.
Business Adoption of AI Agents
Q: How many organizations are deploying or planning to deploy agentic AI in customer-facing roles?
A: Over half (53%) of organizations are either currently deploying or planning to deploy agentic AI for customer-facing roles.
Q: What is the typical number of AI agents organizations are using?
A: Many organizations are using multiple AI agents, with 48% using between six and 20 types, and 19% deploying between 21 and 50 types.
Q: What percentage of organizations are conducting daily API risk assessments?
A: Despite the widespread use of AI agents, only 32% of organizations report conducting daily API risk assessments.
Cybersecurity Risks and AI Agents
Q: What is the primary cybersecurity concern related to AI agent deployment?
A: The primary concern is that without proper API discovery, governance, and security, AI agents could expose businesses to cybersecurity issues such as attacks and data leakage.
Q: What percentage of organizations have a dedicated API security solution?
A: Only 37% of organizations have a dedicated API security solution in place to mitigate these risks.
Q: What percentage of organizations have a dedicated data privacy team overseeing AI initiatives?
A: Similar to API security, only 37% of organizations have a dedicated data privacy team overseeing their AI initiatives.
Crypto Market AI's Take
The increasing adoption of agentic AI agents by businesses, especially for customer-facing roles, presents a significant cybersecurity challenge. The report highlights a critical gap: while organizations are rapidly deploying AI, their investment in essential security measures like API risk assessments and dedicated data privacy teams lags behind. This oversight can lead to vulnerabilities that attackers can exploit, potentially resulting in data breaches and a loss of consumer trust. For businesses operating in the digital asset space, where trust and security are paramount, neglecting API security and data governance when implementing AI can have severe repercussions. Ensuring robust API security and data privacy is crucial for the safe and effective integration of AI in financial services, particularly when dealing with sensitive customer information.