AI Market Logo
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
Leading AI Agents Like ChatGPT Are Vulnerable to Hacking, Security Firm Finds
cybersecurity

Leading AI Agents Like ChatGPT Are Vulnerable to Hacking, Security Firm Finds

Research reveals serious security flaws in AI agents like ChatGPT, Copilot, Gemini, and Einstein, risking data breaches and workflow manipulation.

August 13, 2025
5 min read
Gus Mallett

AI Agents Like ChatGPT Found Vulnerable to Hacking, Security Firm Warns

Some of the most widely-used AI agents and assistants in the world, including ChatGPT, Microsoft Copilot, Gemini, and Salesforce’s Einstein, are vulnerable to being hijacked with little to no user interaction, new research from Zenity Labs claims. Reportedly, hackers can easily gain access to and exfiltrate critical data, manipulate workflows, and even impersonate users with relative ease. It’s understood that attackers could also gain memory persistence, which essentially grants long-term access and control to compromised data. The findings will concern technology chiefs everywhere, who have already indicated that cybersecurity is their top concern in 2025. And with a lot of employees using AI in secret, its security gaps may be more numerous than many senior leaders think.

AI Agents “Highly Vulnerable” to Hacking, Research Shows

A new report from Zenity Labs outlines how some of the most popular AI agents on the market are vulnerable to exploitation by bad actors. During a presentation at the Black Hat USA cybersecurity conference, researchers revealed that the platforms in question all demonstrated serious security weaknesses. They showed that once hackers get access to these AI agents, they can exfiltrate sensitive data, manipulate workflows, and potentially even impersonate users. It is thought that they may even be able to gain memory persistence, which would give them long-term control and access. Greg Zemlin, product marketing manager at Zenity Labs, said:
“They can manipulate instructions, poison knowledge sources, and completely alter the agent’s behavior. This opens the door to sabotage, operational disruption, and long-term misinformation, especially in environments where agents are trusted to make or support critical decisions.”

Findings Shed Light on Numerous Security Loopholes

Zenity Labs set out to establish how attackers could utilize zero-click exploits to compromise leading AI agents. Among the findings, the company concluded that:
  • ChatGPT can be hacked with an email-based prompt injection, giving attackers access to connected Google Drive accounts.
  • Copilot leaked entire CRM databases through its customer-support agent.
  • Einstein can be manipulated to reroute customer communications to different email accounts, giving attackers access to login information.
  • Both Gemini and Copilot can be manipulated into targeting users with social-engineering attacks.
  • Upon discovering these vulnerabilities, Zenity Labs notified the companies concerned, which acted to patch the flaws and introduce long-term safeguards to ensure that the problems don’t recur. A spokesperson for Google stated: “Having a layered defense strategy against prompt injection attacks is crucial.” Unfortunately, that wasn’t enough to deter a recent data breach through the Salesforce CRM.

    Companies Must Act Now to Avert Catastrophe

    The findings from Zenity Labs will certainly ruffles some feathers in the AI world. Increasingly, AI agents are becoming a staple of the modern workplace, with companies investing heavily in their strategies and employees right across the business leveraging the latest tools to streamline their operations. In our report, The Impact of Technology on the Workplace, we spoke to professionals across the business sector to get a better idea of how technology was shaping their working habits. Among our findings, we learned that just 27% of businesses had implemented policies to strictly limit the kind of data that can be shared with AI models. It’s a worrying combination: not only are companies failing to introduce appropriate safeguards, but the AI tools themselves have obvious security vulnerabilities. With adoption continuing apace, businesses everywhere face a race against time to bed in strict governance policies — or they risk ending up as another data breach statistic.
    Source: Originally published at Tech.co on Tue, 12 Aug 2025.

    Frequently Asked Questions (FAQ)

    AI Agent Security

    Q: What are the main vulnerabilities found in popular AI agents? A: Research indicates that AI agents like ChatGPT, Microsoft Copilot, Gemini, and Salesforce’s Einstein are vulnerable to hijacking with little to no user interaction. Hackers can potentially gain access to and exfiltrate critical data, manipulate workflows, impersonate users, and achieve memory persistence for long-term control. Q: How are these AI agents being exploited? A: Specific exploits mentioned include email-based prompt injection for ChatGPT, leading to access to connected Google Drive accounts. Copilot has been found to leak CRM databases, and both Gemini and Copilot can be manipulated for social-engineering attacks. Salesforce's Einstein can be used to reroute customer communications and compromise login information. Q: What is memory persistence in the context of AI agent hacking? A: Memory persistence refers to a state where a hacker gains long-term access and control over the data within a compromised AI agent, even after initial interactions might have ended. Q: What are the potential consequences of these AI agent vulnerabilities? A: Consequences can include the exfiltration of sensitive data, manipulation of business workflows, impersonation of users, sabotage, operational disruption, and the spread of misinformation. Q: What steps are companies taking to address these vulnerabilities? A: Following the discovery of these flaws, the companies involved have been notified and have taken action to patch the vulnerabilities and implement safeguards against future recurrences.

    Cybersecurity in the Age of AI

    Q: Is cybersecurity a top concern for businesses in 2025? A: Yes, cybersecurity is consistently cited as a top concern for technology leaders in 2025. Q: How does the secret use of AI by employees impact security? A: The secret usage of AI by employees can create numerous security gaps that senior leaders may not be aware of, potentially exacerbating the risks associated with AI tool vulnerabilities. Q: What is prompt injection? A: Prompt injection is a type of cyber attack where malicious input is crafted to manipulate an AI model's output, potentially causing it to perform unintended actions or reveal sensitive information.

    Crypto Market AI's Take

    The findings from Zenity Labs underscore a critical point that we at Crypto Market AI frequently highlight: the rapid advancement of AI technologies, while revolutionary, also introduces new and complex security challenges. As AI agents become more integrated into business operations, their susceptibility to exploitation directly impacts data security and operational integrity. This concern for cybersecurity is paramount, especially in the financial sector where Crypto Market AI operates. Our platform leverages AI for market analysis and trading, and therefore, we are deeply invested in ensuring the robust security of our AI systems and user data. For businesses looking to harness the power of AI while mitigating risks, understanding these vulnerabilities and implementing strong security protocols is essential. For those interested in how AI is shaping financial markets and the associated security considerations, our insights into AI-powered trading strategies and the security protocols in cryptocurrency offer valuable context.

    More to Read:

  • The Impact of Technology on the Workplace
  • AI Agents: Opportunities and Risks
  • Understanding Prompt Injection Attacks
Source: Originally published at Tech.co on Tue, 12 Aug 2025.