AI Market Logo
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
BTC Loading... Loading...
ETH Loading... Loading...
BNB Loading... Loading...
SOL Loading... Loading...
XRP Loading... Loading...
ADA Loading... Loading...
AVAX Loading... Loading...
DOT Loading... Loading...
MATIC Loading... Loading...
LINK Loading... Loading...
HAIA Loading... Loading...
Bad code, malicious models and rogue agents: Cybersecurity researchers scramble to prevent AI exploits
ai-security

Bad code, malicious models and rogue agents: Cybersecurity researchers scramble to prevent AI exploits

Cybersecurity experts tackle vulnerabilities in AI code, models, and agents as new AI-driven threats emerge rapidly.

August 8, 2025
5 min read
@SiliconANGLE
The cybersecurity industry is in a state of high alert, grappling with a rapidly evolving threat landscape shaped by artificial intelligence. From vulnerabilities in coding tools to malicious injections into AI models and unmonitored agents operating within critical infrastructure, the challenges are significant and emerging with unprecedented speed. The sheer pace of change means that understanding the full impact of AI on cybersecurity is a daunting task for many. Black Hat USA this year highlighted how AI is fundamentally altering the cybersecurity domain. Researchers are identifying new classes of vulnerabilities, with a particular focus on code generated by autonomous AI agents. Gary Marcus, a cognitive scientist and AI company founder, noted that these systems are excellent mimics but lack an inherent understanding of secure coding practices, leading to the creation of "lots of bad code." An example of this concern was presented by Nvidia researchers regarding the Cursor AI code editor, where an auto-run mode allowed agents to execute commands without explicit user permission, a vulnerability that has since been addressed by the Cursor team with a user-configurable disable feature. The proliferation of AI is also leading to a dramatic increase in Application Programming Interface (API) endpoints. Chuck Herrin, Field Chief Information Security Officer at F5 Inc., stated that companies utilizing generative AI have, on average, five times more API endpoints, significantly expanding the attack surface. Herrin emphasized that securing AI necessitates the rigorous security of these interfaces. The complexity is further compounded by the reliance on components like vector databases, training frameworks, and inference servers. A significant vulnerability discovered in the Nvidia Container Toolkit by Wiz Inc. researchers could expose customer data and proprietary models in a substantial portion of cloud environments, underscoring the critical need for robust infrastructure security in AI. The widespread adoption of large language models (LLMs) is a key driver of AI's expanding use, with Meta's Llama model alone reaching a billion downloads. However, security controls for these models have lagged behind their popularity. Malcolm Harkins, Chief Security and Trust Officer at HiddenLayer Inc., pointed out that the current $300 billion spent on information security does not adequately protect AI models, leaving them vulnerable to exploitation due to a lack of mitigation strategies. This vulnerability extends to AI agents that rely on LLMs, as adversarial manipulation of these models can lead to attackers gaining control over critical agent functions. While major model repositories have responded to identified vulnerabilities, there's a perceived lack of proactive vetting for malicious code within their inventories. The security concerns extend to agentic AI, a rapidly growing sector. A report by Coalfire Inc. demonstrated a 100% success rate in hacking agentic AI applications using adversarial prompts, leading to data leakage and compromise. Apostol Vassilev of NIST warned that agents interacting with cyber infrastructure pose significant risks, advising that this technology should only be exposed to assets and data that organizations are prepared to lose. Despite these concerns, organizations like Simbian Inc. are leveraging AI security operations center agents to enhance threat containment and decision-making. Addressing the "identity problem" for AI agents is a critical focus, with solutions like "confidential computing" and Trusted Execution Environments (TEEs) being explored. Ayal Yogev, co-founder and CEO of Anjuna Security Inc., highlighted this approach, noting its adoption by major financial institutions. He stressed that ensuring an agent's permissions do not exceed those of the user is paramount. The dynamic interplay between AI's rapid advancement and cybersecurity demands a disciplined and accelerated approach to vulnerability identification and remediation, with a constant reminder from industry leaders that in the realm of AI, trust must be earned, not assumed.

Frequently Asked Questions (FAQ)

Understanding AI in Cybersecurity

Q: How is AI changing the cybersecurity landscape? A: AI is introducing new types of vulnerabilities, particularly in code generation and AI model integrity. It's also increasing the attack surface through more API endpoints and creating new challenges in securing AI agents and their underlying models. Q: What are the primary coding-related vulnerabilities introduced by AI? A: AI-generated code can be insecure due to a lack of understanding of secure coding practices by the AI models themselves, and the tendency to use AI coding tools for shortcuts without considering security implications. Q: How are AI agents creating new security risks? A: AI agents can operate without security protection, move across critical infrastructure, and, if the AI models they rely on are compromised, attackers can potentially control them.

AI Model Security

Q: Why are Large Language Models (LLMs) a particular concern for security? A: LLMs are popular and widely downloaded, but security controls for them have not kept pace, making them exploitable. Adversarial manipulation of LLMs can lead to attackers controlling AI agents that depend on them. Q: What are the challenges in securing AI models and repositories? A: AI models are vulnerable due to a lack of specific mitigation strategies against common exploitation techniques. Repositories storing these models also face risks of breaches, and there's a concern about the lack of proactive vetting for malicious code within model inventories.

Agentic AI Security

Q: What are the security concerns surrounding agentic AI? A: Agentic AI applications are proving to be vulnerable to hacking through adversarial prompts, leading to data leakage and compromise. Agents interacting with critical infrastructure also pose risks if not adequately secured. Q: What is confidential computing and how does it relate to AI agent security? A: Confidential computing utilizes Trusted Execution Environments (TEEs) to secure data processing for AI agents, ensuring that code executes safely within a protected area of the processor, which helps address the "identity problem" for agents.

Broader AI Security Implications

Q: What is the biggest mistake to avoid in AI security? A: As stated by industry experts, the primary rule is not to blindly "trust" AI, emphasizing the need for continuous verification and security measures. Q: What is the overarching sentiment regarding AI's impact on cybersecurity? A: While AI offers solutions, there's a nagging concern that the technology itself might become ungovernable, requiring a disciplined and controlled approach from the cybersecurity community to stay ahead of emergent threats.

Crypto Market AI's Take

The cybersecurity industry's struggle to keep pace with AI-driven threats is a critical area that resonates deeply with our mission at Crypto Market AI. We understand that as AI capabilities expand, so too does the potential attack surface, especially within the digital asset space. Our platform leverages AI not only for market intelligence and trading but also to understand the evolving threat landscape. We are particularly focused on how advancements in AI, including AI agents, can be both a source of risk and a powerful tool for defense. Ensuring the security of our platform and our users' assets is paramount, and we continuously integrate the latest AI and cybersecurity insights to achieve this.

More to Read:

Source: Originally published at SiliconANGLE on Fri, 08 Aug 2025 16:10:32 GMT