AI Market Logo
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
Leading AI Agents Like ChatGPT Are Vulnerable to Hacking, Security Firm Finds
cybersecurity

Leading AI Agents Like ChatGPT Are Vulnerable to Hacking, Security Firm Finds

Research reveals ChatGPT, Copilot, Gemini, and Einstein AI agents have critical security flaws risking data breaches and manipulation.

August 13, 2025
5 min read
Gus Mallett

AI Agents Like ChatGPT Are Vulnerable to Hacking, Security Firm Finds

Some of the most widely-used AI agents and assistants in the world, including ChatGPT, Microsoft Copilot, Gemini, and Salesforce’s Einstein, are vulnerable to being hijacked with little to no user interaction, new research from Zenity Labs claims. Reportedly, hackers can easily gain access to and exfiltrate critical data, manipulate workflows, and even impersonate users with relative ease. It’s understood that attackers could also gain memory persistence, which essentially grants long-term access and control to compromised data. The findings will concern technology chiefs everywhere, who have already indicated that cybersecurity is their top concern in 2025. And with a lot of employees using AI in secret, its security gaps may be more numerous than many senior leaders think.

AI Agents “Highly Vulnerable” to Hacking, Research Shows

A new report from Zenity Labs outlines how some of the most popular AI agents on the market are vulnerable to exploitation by bad actors. During a presentation at the Black Hat USA cybersecurity conference, researchers revealed that the platforms in question all demonstrated serious security weaknesses. They showed that once hackers get access to these AI agents, they can exfiltrate sensitive data, manipulate workflows, and potentially even impersonate users. It is thought that they may even be able to gain memory persistence, which would give them long-term control and access. Greg Zemlin, product marketing manager at Zenity Labs, said:
“They can manipulate instructions, poison knowledge sources, and completely alter the agent’s behavior. This opens the door to sabotage, operational disruption, and long-term misinformation, especially in environments where agents are trusted to make or support critical decisions.”

Findings Shed Light on Numerous Security Loopholes

Zenity Labs set out to establish how attackers could utilize zero-click exploits to compromise leading AI agents. Among the findings, the company concluded that:
  • ChatGPT can be hacked with an email-based prompt injection, giving attackers access to connected Google Drive accounts.
  • Copilot leaked entire CRM databases through its customer-support agent.
  • Einstein can be manipulated to reroute customer communications to different email accounts, giving attackers access to login information.
  • Both Gemini and Copilot can be manipulated into targeting users with social-engineering attacks.
  • Upon discovering these vulnerabilities, Zenity Labs notified the companies concerned, which acted to patch the flaws and introduce long-term safeguards to ensure that the problems don’t recur. A spokesperson for Google stated: “Having a layered defense strategy against prompt injection attacks is crucial.” Unfortunately, that wasn’t enough to deter a recent data breach through the Salesforce CRM.

    Companies Must Act Now to Avert Catastrophe

    The findings from Zenity Labs will certainly ruffle some feathers in the AI world. Increasingly, AI agents are becoming a staple of the modern workplace, with companies investing heavily in their strategies and employees right across the business leveraging the latest tools to streamline their operations. In our report, The Impact of Technology on the Workplace, we spoke to professionals across the business sector to get a better idea of how technology was shaping their working habits. Among our findings, we learned that just 27% of businesses had implemented policies to strictly limit the kind of data that can be shared with AI models. It’s a worrying combination: not only are companies failing to introduce appropriate safeguards, but the AI tools themselves have obvious security vulnerabilities. With adoption continuing apace, businesses everywhere face a race against time to bed in strict governance policies — or they risk ending up as another data breach statistic.
    Source: Leading AI Agents at Risk of Hacks - Tech.co

    Frequently Asked Questions (FAQ)

    AI Agent Security and Vulnerabilities

    Q: What specific vulnerabilities were found in AI agents like ChatGPT? A: Researchers discovered vulnerabilities that allow for prompt injection attacks, enabling hackers to gain access to connected accounts (like Google Drive), exfiltrate sensitive CRM data, reroute customer communications, and conduct social engineering attacks. Q: Can AI agents be hijacked with minimal user interaction? A: Yes, the research indicates that AI agents can be compromised with "little to no user interaction," often through methods like email-based prompt injection. Q: What is "memory persistence" in the context of AI agent hacking? A: Memory persistence means that attackers can maintain long-term access and control over compromised data within the AI agent even after the initial exploit. Q: Which major AI agents were found to be vulnerable? A: The research identified vulnerabilities in widely used AI agents such as ChatGPT, Microsoft Copilot, Google Gemini, and Salesforce's Einstein. Q: What are the potential consequences of these AI agent vulnerabilities for businesses? A: Consequences include data exfiltration, manipulation of workflows, impersonation of users, sabotage, operational disruption, and the spread of misinformation.

    Crypto Market AI's Take

    This recent research on AI agent vulnerabilities highlights a critical area of concern for businesses integrating AI into their operations. At Crypto Market AI, we understand the importance of robust security in any technology, especially those handling sensitive data. Our focus on AI in the financial sector means we are keenly aware of the need for secure and reliable AI solutions. We provide insights into how AI can be used for market analysis and trading, but always emphasize the importance of secure implementation and understanding potential risks. To learn more about how AI is being integrated into financial services and the associated security considerations, explore our resources on AI-powered crypto trading bots and our comprehensive AI analysts guide.

    More to Read:

  • The Impact of Technology on the Workplace
  • Businesses' Top Concern in 2025 is Cybersecurity
  • Gen Z is Most Likely to Use AI in Secret