August 4, 2025
5 min read
Jordan Smith
Apiiro introduces AutoFix AI Agent to auto-remediate code and design risks in IDEs using runtime context, enhancing secure development.
Apiiro Launches AutoFix AI Agent to Automatically Remediate Code and Design Risks in IDEs
Agentic application security platform Apiiro has introduced AutoFix AI Agent, an industry-first AI tool designed to automatically fix design and code risks by leveraging runtime context.

Meeting Developers Where They Are Through MCP Connection
The AutoFix AI Agent operates directly within developers’ integrated development environments (IDEs) without requiring plug-ins, using a remote Model Context Protocol (MCP) connection.

“We’re meeting developers where they are—in their IDEs with deep code-to-runtime context—and giving them the secure path forward without slowing them down,” said Moti Gindi, Chief Product Officer at Apiiro. “It’s about empowering developers to fix risks and not vulnerabilities—in real time, with the runtime context, software architecture, and organization policy.”

Apiiro highlights the rapid growth of AI coding assistants such as GitHub Copilot, Gemini Code Assist, and Cursor as a key motivator for developing this tool. These AI assistants often operate with limited or no runtime context and lack governance by existing security tools, which can introduce vulnerabilities, unvetted technologies, business logic risks, and code that bypasses organizational security policies and architectural standards.

Risks Found in AI-Generated Code
According to the Center for Security and Emerging Technologies (CSET) at Georgetown University’s Walsh School of Foreign Service, up to 50% of the code generated by AI code assistants contains vulnerabilities, with 10% of those vulnerabilities being actively exploitable and carrying true business impact. CSET states that large language models (LLMs) and other AI systems pose direct and indirect cybersecurity risks by generating insecure code, being vulnerable to attack and manipulation, and causing downstream cybersecurity impacts such as feedback loops affecting future AI training. The report emphasizes that while evaluation benchmarks for code generation models focus on functionality, they often neglect security, potentially deprioritizing secure code generation during model training.

Tool Built for Scale and Reliability
The AutoFix AI Agent scales expertise across development teams by automatically generating threat models for risky feature requests before any code is written. It also fixes findings related to Static Application Security Testing (SAST), Software Composition Analysis (SCA), secrets, and API security.

Leveraging unique runtime context, the agent makes precise, risk-based decisions by understanding each organization’s software architecture, security policies, business impact, and risk acceptance lifecycle. This enables it to deliver autofixes aligned with enterprise standards rather than generic solutions.

“AI code assistants represent one of the most transformative productivity tools of our lifetime. But by focusing solely on code, they lack context—missing critical signals like security policies and standards, compensating controls, and business risk,” said Idan Plotnik, Co-founder and CEO of Apiiro. “This disconnect introduces significant risk to enterprises, as ungoverned AI coding tools are adopted faster than application security teams can keep up. Our AutoFix AI Agent doesn’t just detect issues—it intelligently fixes them using the same contextual understanding and organizational knowledge that application security and risk management teams rely on to make informed decisions.”

The AutoFix AI Agent utilizes data from Apiiro’s platform, which maps software architecture across all material changes, powered by Deep Code Analysis (DCA), Code-to-Runtime matching, and the Risk Graph engine. Core capabilities include:

- AutoFix: Automatically fixes design and code risks with runtime context.
- AutoGovern: Enforces policies, standards, and secure coding guardrails automatically.
- AutoManage: Automates risk lifecycle management and measurement across the software development lifecycle (SDLC).

“In a world where AI generates code, no software should ship without an AI AppSec agent securing it,” said Plotnik. “We’re enabling security teams to unlock full developer productivity while automatically fixing the most critical risks to the business.”
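For readers wondering what a remote MCP connection of the kind described above looks like in practice, the snippet below is a hypothetical configuration sketch. It assumes the IDE’s MCP client follows the `mcpServers` JSON convention common to several editors; the server name, URL, and auth header are illustrative placeholders, not Apiiro’s actual endpoint or API.

```json
{
  "mcpServers": {
    "apiiro-autofix": {
      "url": "https://mcp.example-tenant.invalid/sse",
      "headers": {
        "Authorization": "Bearer <YOUR_API_TOKEN>"
      }
    }
  }
}
```

Because the connection is remote, no local plug-in or binary installation is required; the IDE’s built-in MCP client talks to the hosted agent directly.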
For more information, visit the original article at Channel Insider.