August 10, 2025
5 min read
The Hacker News
Researchers Reveal GPT-5 Jailbreak and Zero-Click AI Agent Attacks Threatening Cloud and IoT Security
Cybersecurity researchers have uncovered a sophisticated jailbreak technique that bypasses the ethical guardrails implemented by OpenAI in its latest large language model (LLM), GPT-5, enabling the generation of illicit instructions. Generative AI security platform NeuralTrust combined a known method called Echo Chamber with narrative-driven steering to trick GPT-5 into producing undesirable responses."We use Echo Chamber to seed and reinforce a subtly poisonous conversational context, then guide the model with low-salience storytelling that avoids explicit intent signaling," said security researcher MartĂ JordĂ . "This combination nudges the model toward the objective while minimizing triggerable refusal cues."Echo Chamber, first detailed in June 2025, is a jailbreak approach that deceives LLMs into responding to prohibited topics using indirect references, semantic steering, and multi-step inference. Recently, it has been paired with a multi-turn jailbreaking technique called Crescendo to bypass defenses in xAI's Grok 4. In the latest GPT-5 attack, researchers demonstrated how harmful procedural content can be elicited by framing it within a story context. Instead of directly requesting malicious instructions (e.g., for creating Molotov cocktails), the AI is prompted to generate sentences containing specific keywords such as "cocktail," "story," "survival," "molotov," "safe," and "lives." The model is then iteratively steered to expand on these themes without overtly stating malicious intent. This attack unfolds as a "persuasion" loop within a conversational context, gradually guiding the model along a path that minimizes refusal triggers and allows the narrative to progress without explicit malicious prompts.
"This progression shows Echo Chamber's persuasion cycle at work: the poisoned context is echoed back and gradually strengthened by narrative continuity," JordĂ explained. "The storytelling angle functions as a camouflage layer, transforming direct requests into continuity-preserving elaborations."
"This reinforces a key risk: keyword or intent-based filters are insufficient in multi-turn settings where context can be gradually poisoned and then echoed back under the guise of continuity."Meanwhile, SPLX's testing of GPT-5 revealed that the raw, unguarded model is "nearly unusable for enterprise out of the box," with GPT-4o outperforming GPT-5 on hardened benchmarks.
"Even GPT-5, with all its new 'reasoning' upgrades, fell for basic adversarial logic tricks," said Dorian Granoša. "OpenAI's latest model is undeniably impressive, but security and alignment must still be engineered, not assumed."As AI agents and cloud-based LLMs become more prevalent in critical environments, enterprises face emerging risks such as prompt injections (aka promptware) and jailbreaks that could lead to data theft and severe consequences. AI security company Zenity Labs disclosed a new set of attacks named AgentFlayer, where ChatGPT Connectors—such as those for Google Drive—can be weaponized to trigger zero-click attacks. These attacks exfiltrate sensitive data like API keys stored in cloud services by embedding indirect prompt injections within seemingly innocuous documents uploaded to AI chatbots. Other zero-click attacks include:
- A malicious Jira ticket causing Cursor AI code editor to exfiltrate secrets from repositories or local file systems when integrated with Jira Model Context Protocol (MCP).
- A crafted email targeting Microsoft Copilot Studio that injects prompts to deceive custom agents into leaking valuable data.
- Understanding AI Agent Vulnerabilities: A Deep Dive
- The Future of AI in Cybersecurity: Trends and Predictions
- Navigating the Crypto Market: Essential Guides for Investors
"The AgentFlayer zero-click attack is a subset of the same EchoLeak primitives," said Itay Ravia, head of Aim Labs. "These vulnerabilities are intrinsic and we will see more of them in popular agents due to poor understanding of dependencies and the need for guardrails. Importantly, Aim Labs already has deployed protections available to defend agents from these types of manipulations."These attacks highlight how indirect prompt injections can impact generative AI systems and spill over into real-world consequences. Connecting AI models to external systems increases the attack surface exponentially, introducing new security vulnerabilities and untrusted data risks. Trend Micro's State of AI Security Report for H1 2025 emphasized:
"Countermeasures like strict output filtering and regular red teaming can help mitigate the risk of prompt attacks, but the way these threats have evolved in parallel with AI technology presents a broader challenge in AI development: implementing features or capabilities that strike a delicate balance between fostering trust in AI systems and keeping them secure."Earlier this week, researchers from Tel-Aviv University, Technion, and SafeBreach demonstrated how prompt injections could hijack smart home systems using Google's Gemini AI. Attackers could manipulate devices like internet-connected lights, smart shutters, and boilers via poisoned calendar invites. Another zero-click attack described by Straiker revealed that the "excessive autonomy" of AI agents—their ability to act, pivot, and escalate—can be exploited to stealthily access and leak data without user interaction.
"These attacks bypass classic controls: no user click, no malicious attachment, no credential theft," said researchers Amanda Rousseau, Dan Regalado, and Vinay Kumar Pidathala. "AI agents bring huge productivity gains, but also new, silent attack surfaces."