AI Market Logo
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
Leading AI Agents Like ChatGPT Are Vulnerable to Hacking, Security Firm Finds
cybersecurity

Leading AI Agents Like ChatGPT Are Vulnerable to Hacking, Security Firm Finds

Zenity Labs reveals critical security flaws in AI agents like ChatGPT, Copilot, Gemini, and Einstein that risk data breaches and manipulation.

August 13, 2025
5 min read
Gus Mallett

AI Agents Like ChatGPT Vulnerable to Hacking, Security Firm Finds

Some of the most widely-used AI agents and assistants in the world, including ChatGPT, Microsoft Copilot, Gemini, and Salesforce’s Einstein, are vulnerable to being hijacked with little to no user interaction, new research from Zenity Labs claims. Hackers can easily gain access to and exfiltrate critical data, manipulate workflows, and even impersonate users with relative ease. Attackers could also gain memory persistence, granting long-term access and control to compromised data. These findings will concern technology leaders, especially since cybersecurity remains their top priority in 2025. Additionally, with many employees using AI tools secretly, the security gaps may be more widespread than senior leaders realize.

AI Agents “Highly Vulnerable” to Hacking, Research Shows

A new report from Zenity Labs highlights serious security weaknesses in popular AI agents. During a presentation at the Black Hat USA cybersecurity conference, researchers demonstrated how these platforms can be exploited by bad actors. Once hackers access these AI agents, they can:
  • Exfiltrate sensitive data
  • Manipulate workflows
  • Impersonate users
  • Potentially gain memory persistence for long-term control
  • Greg Zemlin, product marketing manager at Zenity Labs, explained:
    “They can manipulate instructions, poison knowledge sources, and completely alter the agent’s behavior. This opens the door to sabotage, operational disruption, and long-term misinformation, especially in environments where agents are trusted to make or support critical decisions.”

    Findings Shed Light on Numerous Security Loopholes

    Zenity Labs investigated zero-click exploits compromising leading AI agents. Key findings include:
  • ChatGPT can be hacked via email-based prompt injection, granting attackers access to connected Google Drive accounts.
  • Copilot leaked entire CRM databases through its customer-support agent.
  • Einstein can be manipulated to reroute customer communications to different email accounts, exposing login information.
  • Both Gemini and Copilot can be manipulated to target users with social-engineering attacks.
  • After discovering these vulnerabilities, Zenity Labs notified the affected companies, which patched the flaws and implemented safeguards. A Google spokesperson emphasized the importance of a layered defense strategy against prompt injection attacks. However, recent incidents like the Salesforce CRM data breach show the ongoing risks.

    Companies Must Act Now to Avert Catastrophe

    As AI agents become staples in modern workplaces, businesses are investing heavily in AI strategies. However, only 27% of companies have policies limiting the type of data shared with AI models, according to Tech.co’s The Impact of Technology on the Workplace report. This combination of insufficient safeguards and inherent AI vulnerabilities puts companies at risk of becoming the next data breach statistic. Businesses must urgently implement strict governance policies and security measures to protect sensitive data and maintain trust.

    Frequently Asked Questions (FAQ)

    AI Agent Security

    Q: What are AI agents and why are they vulnerable? A: AI agents are AI-powered tools like ChatGPT, Copilot, and Gemini that can perform tasks and interact with users. They are vulnerable to hacking because attackers can exploit vulnerabilities with minimal user interaction, allowing them to steal data, manipulate workflows, or impersonate users. Q: What specific vulnerabilities were found in popular AI agents? A: Zenity Labs research highlighted vulnerabilities such as email-based prompt injection in ChatGPT, which could grant access to connected Google Drive accounts. Copilot was found to leak entire CRM databases via its customer-support agent, and Salesforce's Einstein could be manipulated to reroute communications and expose login information. Gemini and Copilot were also found to be susceptible to social engineering attacks. Q: What are the potential consequences of these AI agent vulnerabilities? A: Consequences include unauthorized access to sensitive data, manipulation of critical workflows, user impersonation, and even memory persistence which grants attackers long-term control over compromised systems. This can lead to operational disruption, misinformation, and sabotage. Q: How can businesses protect their AI agents from these threats? A: Businesses need to implement strict governance policies and robust security measures. This includes limiting the type of data shared with AI models, regularly updating AI tools, and staying informed about emerging vulnerabilities. Q: Are there any specific examples of AI agents being compromised? A: The research pointed to specific instances where ChatGPT could be exploited via email prompt injection, potentially accessing Google Drive. Copilot was noted for leaking CRM databases, and Einstein for rerouting customer communications.

    Crypto Market AI's Take

    This report on AI agent vulnerabilities underscores a critical aspect of the current technological landscape: the intersection of AI advancement and cybersecurity. As AI agents become increasingly integrated into business operations, their security posture becomes paramount. At Crypto Market AI, we recognize that sophisticated AI tools, while offering significant efficiency gains, also introduce new attack vectors. Our platform leverages AI for market analysis and trading, but we prioritize robust security protocols and continuous monitoring to mitigate risks. Understanding these vulnerabilities is crucial for ensuring the safe and effective deployment of AI in sensitive financial contexts. For businesses looking to enhance their cybersecurity posture in the age of AI, exploring advanced threat detection and secure data handling practices is essential.

    More to Read:

  • AI Agents: The Future of Business Automation or a Security Nightmare?
  • Protecting Your Crypto Assets: A Guide to Secure Wallets
  • The Growing Threat of AI-Powered Scams in the Crypto Space
Originally published at Tech.co on Tue, 12 Aug 2025.