August 13, 2025
5 min read
Gus Mallett
AI Agents Like ChatGPT Vulnerable to Hacking, Security Firm Finds
Some of the most widely used AI agents and assistants in the world, including ChatGPT, Microsoft Copilot, Gemini, and Salesforce’s Einstein, are vulnerable to being hijacked with little to no user interaction, new research from Zenity Labs claims. Hackers can gain access to and exfiltrate critical data, manipulate workflows, and even impersonate users with relative ease. Attackers could also gain memory persistence, granting long-term access to and control of compromised data. These findings will concern technology leaders, especially since cybersecurity remains their top priority in 2025. And with many employees using AI tools in secret, the security gaps may be more widespread than senior leaders realize.

AI Agents “Highly Vulnerable” to Hacking, Research Shows
A new report from Zenity Labs highlights serious security weaknesses in popular AI agents. In a presentation at the Black Hat USA cybersecurity conference, researchers demonstrated how these platforms can be exploited by bad actors. Once hackers gain access to these AI agents, they can:

- Exfiltrate sensitive data
- Manipulate workflows
- Impersonate users
- Potentially gain memory persistence for long-term control

Greg Zemlin, product marketing manager at Zenity Labs, explained: “They can manipulate instructions, poison knowledge sources, and completely alter the agent’s behavior. This opens the door to sabotage, operational disruption, and long-term misinformation, especially in environments where agents are trusted to make or support critical decisions.”

The researchers’ key findings include:

- ChatGPT can be hacked via email-based prompt injection, granting attackers access to connected Google Drive accounts.
- Copilot leaked entire CRM databases through its customer-support agent.
- Einstein can be manipulated to reroute customer communications to different email accounts, exposing login information.
- Both Gemini and Copilot can be manipulated to target users with social-engineering attacks.

After discovering these vulnerabilities, Zenity Labs notified the affected companies, which patched the flaws and implemented safeguards. A Google spokesperson emphasized the importance of a layered defense strategy against prompt injection attacks. However, recent incidents such as the Salesforce CRM data breach show that the risks are ongoing.