August 4, 2025
5 min read
Jordan Smith
Apiiro Launches AutoFix AI to Fix Design and Code Risks
Apiiro launches AutoFix AI Agent to auto-remediate code and design risks in integrated development environments (IDEs) using runtime context, bridging AI coding and secure development. Agentic application security platform Apiiro is debuting AutoFix AI Agent, an industry-first AI agent that automatically fixes design and code risks using runtime context.
Meeting developers where they are through MCP connection
The tool operates within a developer’s IDE without requiring plug-ins, leveraging a remote Model Context Protocol (MCP) connection. “We’re meeting developers where they are – in their IDEs with deep code-to-runtime context – and giving them the secure path forward without slowing them down,” said Moti Gindi, Chief Product Officer at Apiiro. “It’s about empowering developers to fix risks and not vulnerabilities – in real time, with the runtime context, software architecture, and organization policy.”
Apiiro highlights the rise of AI coding assistants like GitHub Copilot, Gemini Code Assist, and Cursor as a key driver for this tool. These assistants often operate with limited or no runtime context and lack governance by existing security tools, which can lead to vulnerabilities, unvetted technologies, business logic risks, and code that bypasses organizational security policies and architectural standards.
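To give a rough sense of what the remote MCP connection mentioned above involves, the sketch below opens a connection to a hypothetical MCP server and lists the tools it exposes, using the open-source MCP Python SDK. The endpoint URL is invented for illustration, and none of this reflects Apiiro’s unpublished integration details; in AutoFix’s case the IDE’s built-in MCP support plays the client role, which is why no plug-in is needed.

```python
# Minimal sketch of a remote MCP connection using the open-source MCP Python SDK.
# The server URL below is hypothetical; Apiiro has not published its endpoint details.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client  # SSE transport for remote MCP servers


async def main() -> None:
    # Open a streaming connection to a (hypothetical) remote MCP server.
    async with sse_client("https://mcp.example.com/sse") as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # discover the tools the server exposes
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


if __name__ == "__main__":
    asyncio.run(main())
```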
Risks found in AI-generated code
According to the Center for Security and Emerging Technology (CSET) at Georgetown University’s Walsh School of Foreign Service, up to 50% of AI code assistants generate code containing vulnerabilities, with 10% of those being actively exploitable and having real business impact. CSET states that large language models (LLMs) and AI systems generating code pose direct and indirect cybersecurity risks, including insecure code generation, vulnerability to attacks and manipulation, and downstream impacts such as feedback loops affecting future AI training. “Evaluation benchmarks for code generation models often focus on the models’ ability to produce functional code, but do not assess their ability to generate secure code, which may incentivize a deprioritization of security over functionality during model training,” the report notes.
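To make the kind of flaw CSET describes concrete, the snippet below shows a hypothetical example that is not taken from the report: a lookup built by interpolating user input directly into SQL, a pattern code assistants are frequently criticized for suggesting, next to the parameterized version a remediation would produce.

```python
# Hypothetical illustration of a common flaw in AI-suggested code: SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")


def find_user_insecure(name: str):
    # Typical risky pattern: user input interpolated directly into the SQL string.
    # Input such as "' OR '1'='1" changes the query's meaning (SQL injection).
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()


def find_user_fixed(name: str):
    # Remediated version: a parameterized query keeps input as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()


print(find_user_insecure("' OR '1'='1"))  # returns every row
print(find_user_fixed("' OR '1'='1"))     # returns nothing
```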
Tool built for scale and reliability
The AutoFix AI Agent scales expertise across development teams by automatically generating threat models for risky feature requests before code is written. It fixes static application security testing (SAST), software composition analysis (SCA), secrets, and API security findings (a generic example of a secrets fix appears after the capability list below). The agent uses unique runtime context to make precise, risk-based decisions based on an organization’s software architecture, security policies, business impact, and risk acceptance lifecycle.
“AI code assistants represent one of the most transformative productivity tools of our lifetime. But by focusing solely on code, they lack context – missing critical signals like security policies and standards, compensating controls, and business risk,” said Idan Plotnik, Co-founder and CEO of Apiiro. “This disconnect introduces significant risk to enterprises, as ungoverned AI coding tools are adopted faster than application security teams can keep up. Our AutoFix AI Agent doesn’t just detect issues – it intelligently fixes them using the same contextual understanding and organizational knowledge that application security and risk management teams rely on to make informed decisions.”
The AutoFix AI Agent leverages data from Apiiro’s platform, which maps software architecture across all material changes, powered by Deep Code Analysis (DCA), Code-to-Runtime matching, and the Risk Graph engine. Core capabilities include:
- AutoFix: Automatically fixes design and code risks with runtime context.
- AutoGovern: Enforces policies, standards, and secure coding guardrails automatically.
- AutoManage: Automates risk lifecycle management and measurement across the software development lifecycle (SDLC).
“In a world where AI generates code, no software should ship without an AI AppSec agent securing it,” said Plotnik. “We’re enabling security teams to unlock full developer productivity while automatically fixing the most critical risks to the business.”
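As a generic illustration of the secrets findings referenced above (not Apiiro’s actual remediation output, which the company has not published), an automated fix for a hardcoded credential typically moves the value out of source code and into the environment or a secrets manager:

```python
# Generic before/after for a hardcoded-secret finding; not Apiiro's actual output.
import os

# Before: credential embedded in source, visible to anyone with repository access.
# API_KEY = "sk-live-1234567890abcdef"   # hypothetical leaked key

# After: the value is read from the environment (or a secrets manager) at runtime.
API_KEY = os.environ.get("PAYMENTS_API_KEY")
if API_KEY is None:
    raise RuntimeError("PAYMENTS_API_KEY is not set; configure it in your secrets store")
```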
Source: Originally published at Channel Insider on August 4, 2025.