AI Market Logo
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
Leading AI Agents Like ChatGPT Are Vulnerable to Hacking, Security Firm Finds
ai-vulnerabilities

Leading AI Agents Like ChatGPT Are Vulnerable to Hacking, Security Firm Finds

Zenity Labs reveals serious security flaws in ChatGPT, Copilot, Gemini, and Einstein that enable hackers to steal data and manipulate workflows.

August 13, 2025
5 min read
Gus Mallett

AI Agents Like ChatGPT and Copilot Vulnerable to Hacking, Zenity Labs Warns

Some of the most widely-used AI agents and assistants in the world, including ChatGPT, Microsoft Copilot, Gemini, and Salesforce’s Einstein, are vulnerable to being hijacked with little to no user interaction, new research from Zenity Labs claims. Reportedly, hackers can easily gain access to and exfiltrate critical data, manipulate workflows, and even impersonate users, with relative ease. It’s understood that attackers could also gain memory persistence, which essentially grants long-term access and control to compromised data. The findings will concern technology chiefs everywhere, who have already indicated that cybersecurity is their top concern in 2025. And with a lot of employees using AI in secret, its security gaps may be more numerous than many senior leaders think.

AI Agents “Highly Vulnerable” to Hacking, Research Shows

A new report from Zenity Labs outlines how some of the most popular AI agents on the market are vulnerable to exploitation by bad actors. During a presentation at the Black Hat USA cybersecurity conference, researchers revealed that the platforms in question all demonstrated serious security weaknesses. They showed that once hackers get access to these AI agents, they can exfiltrate sensitive data, manipulate workflows, and potentially even impersonate users. It is thought that they may even be able to gain memory persistence, which would give them long-term control and access. Greg Zemlin, product marketing manager at Zenity Labs, said: “They can manipulate instructions, poison knowledge sources, and completely alter the agent’s behavior. This opens the door to sabotage, operational disruption, and long-term misinformation, especially in environments where agents are trusted to make or support critical decisions.”

Findings Shed Light on Numerous Security Loopholes

Zenity Labs set out to establish how attackers could utilize zero-click exploits to compromise leading AI agents. Among the findings, the company concluded that:
  • ChatGPT can be hacked with an email-based prompt injection, giving attackers access to connected Google Drive accounts.
  • Copilot leaked entire CRM databases through its customer-support agent.
  • Einstein can be manipulated to reroute customer communications to different email accounts, giving attackers access to login information.
  • Both Gemini and Copilot can be manipulated into targeting users with social-engineering attacks.
  • Upon discovering these vulnerabilities, Zenity Labs notified the companies concerned, which acted to patch the flaws and introduce long-term safeguards to ensure that the problems don’t recur. A spokesperson for Google stated: “Having a layered defense strategy against prompt injection attacks is crucial.” Unfortunately, that wasn’t enough to deter a recent data breach through the Salesforce CRM.

    Companies Must Act Now to Avert Catastrophe

    The findings from Zenity Labs will certainly ruffles some feathers in the AI world. Increasingly, AI agents are becoming a staple of the modern workplace, with companies investing heavily in their strategies and employees right across the business leveraging the latest tools to streamline their operations. In our report, The Impact of Technology on the Workplace, we spoke to professionals across the business sector to get a better idea of how technology was shaping their working habits. Among our findings, we learned that just 27% of businesses had implemented policies to strictly limit the kind of data that can be shared with AI models. It’s a worrying combination: not only are companies failing to introduce appropriate safeguards, but the AI tools themselves have obvious security vulnerabilities. With adoption continuing apace, businesses everywhere face a race against time to bed in strict governance policies — or they risk ending up as another data breach statistic.
    Source: Leading AI Agents at Risk of Hacks - Tech.co

    Frequently Asked Questions (FAQ)

    AI Agent Vulnerabilities and Security

    Q: What specific vulnerabilities were found in popular AI agents like ChatGPT and Copilot? A: Research from Zenity Labs indicates that these AI agents are vulnerable to hijacking with minimal user interaction. Specific examples include ChatGPT being vulnerable to email-based prompt injection allowing access to Google Drive accounts, Copilot leaking CRM databases, and Einstein being manipulated to reroute customer communications. Q: What kind of malicious activities can hackers perform once they gain access to these AI agents? A: Hackers can gain access to and exfiltrate critical data, manipulate workflows, impersonate users, and potentially achieve memory persistence, granting long-term control and access. Q: Are companies aware of these AI security gaps? A: Cybersecurity is a top concern for technology chiefs in 2025. However, with many employees using AI in secret, the extent of these security gaps may be underestimated by senior leadership. Q: What measures are being taken to address these vulnerabilities? A: Zenity Labs has notified the companies concerned, and they have reportedly acted to patch the identified flaws and implement long-term safeguards to prevent recurrence. Google has emphasized the importance of a layered defense strategy against prompt injection attacks. Q: How can businesses better protect themselves against AI agent vulnerabilities? A: The article suggests that companies need to implement strict governance policies and limit the type of data shared with AI models. Only 27% of businesses have such policies in place, indicating a significant area for improvement.

    Crypto Market AI's Take

    This report highlights a critical concern for the burgeoning field of AI agents, which are increasingly integrated into business operations. The vulnerabilities detailed by Zenity Labs underscore the immediate need for robust cybersecurity measures within AI development and deployment. At Crypto Market AI, we are committed to providing secure and intelligent solutions. Our platform leverages AI for market analysis and trading, and we prioritize the security of our users' data and assets. Understanding these vulnerabilities is crucial as AI becomes more pervasive, and we strive to stay ahead of potential threats through continuous research and development. Explore our insights on AI agents in finance to learn more about how AI is shaping the financial landscape securely and efficiently.

    More to Read:

  • AI Agents: Are They Broken? Can GPT-5 Fix Them?
  • The Impact of Technology on the Workplace
  • Data Breaches: An Updated List of Major Incidents