Apiiro Launches AutoFix AI Agent to Automatically Remediate Code and Design Risks in IDEs

Apiiro introduces AutoFix AI Agent to auto-remediate code and design risks in IDEs using runtime context, enhancing secure development.

August 4, 2025
5 min read
Jordan Smith


Agentic application security platform Apiiro has introduced AutoFix AI Agent, an industry-first AI tool designed to automatically fix design and code risks by leveraging runtime context.

Meeting Developers Where They Are Through MCP Connection

The AutoFix AI Agent operates directly within developers’ integrated development environments (IDEs) without requiring plug-ins, using a remote Model Context Protocol (MCP) connection. “We’re meeting developers where they are—in their IDEs with deep code-to-runtime context—and giving them the secure path forward without slowing them down,” said Moti Gindi, Chief Product Officer at Apiiro. “It’s about empowering developers to fix risks and not vulnerabilities—in real time, with the runtime context, software architecture, and organization policy.”

Apiiro cites the rapid growth of AI coding assistants such as GitHub Copilot, Gemini Code Assist, and Cursor as a key motivator for developing the tool. These assistants often operate with little or no runtime context and are not governed by existing security tools, which can introduce vulnerabilities, unvetted technologies, business logic risks, and code that bypasses organizational security policies and architectural standards.

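Apiiro has not published the technical details of this integration in the article, but the Model Context Protocol is JSON-RPC 2.0 based, so a remote MCP connection of this kind roughly amounts to the IDE assistant posting tool-call requests to a hosted endpoint. The sketch below is a minimal, hypothetical illustration of such a call; the endpoint URL, tool name, and arguments are invented for the example and do not reflect Apiiro's actual interface.

```python
import json
import requests  # assumes a plain HTTP transport; real clients typically use an MCP SDK

# Hypothetical remote MCP endpoint and tool -- not Apiiro's actual URL or schema.
MCP_ENDPOINT = "https://mcp.example-appsec-vendor.com/mcp"

# MCP messages are JSON-RPC 2.0; "tools/call" asks the server to run a named tool.
request_body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "autofix_finding",             # hypothetical tool name
        "arguments": {                         # hypothetical arguments
            "repository": "payments-service",
            "finding_id": "SAST-1234",
        },
    },
}

response = requests.post(
    MCP_ENDPOINT,
    headers={"Content-Type": "application/json",
             "Accept": "application/json, text/event-stream"},
    data=json.dumps(request_body),
    timeout=30,
)
print(response.json())  # in this sketch, a JSON-RPC result carrying the proposed fix
```
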
Risks Found in AI-Generated Code

According to the Center for Security and Emerging Technology (CSET) at Georgetown University’s Walsh School of Foreign Service, up to 50% of the code generated by AI coding assistants contains vulnerabilities, and roughly 10% of those vulnerabilities are actively exploitable with real business impact. CSET states that large language models (LLMs) and other AI systems pose direct and indirect cybersecurity risks by generating insecure code, being vulnerable to attack and manipulation, and causing downstream cybersecurity impacts such as feedback loops that affect future AI training. The report emphasizes that while evaluation benchmarks for code generation models focus on functionality, they often neglect security, which can deprioritize secure code generation during model training.

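Read literally, those two figures compound: if up to half of AI-generated changes contain a vulnerability and up to a tenth of those are actively exploitable, then on the order of 5% of all AI-generated changes carry exploitable, business-impacting risk. A back-of-the-envelope calculation with a hypothetical batch size makes that concrete:

```python
# Illustrative reading of the figures cited above; the batch size is hypothetical.
ai_generated_changes = 1_000
vulnerable_rate = 0.50    # up to 50% contain a vulnerability
exploitable_rate = 0.10   # up to 10% of those are actively exploitable

vulnerable = ai_generated_changes * vulnerable_rate   # 500 changes
exploitable = vulnerable * exploitable_rate           # 50 changes

print(f"{exploitable:.0f} of {ai_generated_changes} AI-generated changes "
      f"(~{exploitable / ai_generated_changes:.0%}) carry exploitable risk")
```
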
Tool Built for Scale and Reliability

The AutoFix AI Agent scales expertise across development teams by automatically generating threat models for risky feature requests before any code is written. It also fixes findings related to Static Application Security Testing (SAST), Software Composition Analysis (SCA), secrets, and API security. Leveraging unique runtime context, the agent makes precise, risk-based decisions by understanding each organization’s software architecture, security policies, business impact, and risk acceptance lifecycle. This enables it to deliver autofixes aligned with enterprise standards rather than generic solutions.

“AI code assistants represent one of the most transformative productivity tools of our lifetime. But by focusing solely on code, they lack context—missing critical signals like security policies and standards, compensating controls, and business risk,” said Idan Plotnik, Co-founder and CEO of Apiiro. “This disconnect introduces significant risk to enterprises, as ungoverned AI coding tools are adopted faster than application security teams can keep up. Our AutoFix AI Agent doesn’t just detect issues—it intelligently fixes them using the same contextual understanding and organizational knowledge that application security and risk management teams rely on to make informed decisions.”

The AutoFix AI Agent utilizes data from Apiiro’s platform, which maps software architecture across all material changes, powered by Deep Code Analysis (DCA), Code-to-Runtime matching, and the Risk Graph engine. Core capabilities include:
  • AutoFix: Automatically fixes design and code risks with runtime context.
  • AutoGovern: Enforces policies, standards, and secure coding guardrails automatically.
  • AutoManage: Automates risk lifecycle management and measurement across the software development lifecycle (SDLC).
“In a world where AI generates code, no software should ship without an AI AppSec agent securing it,” said Plotnik. “We’re enabling security teams to unlock full developer productivity while automatically fixing the most critical risks to the business.”

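Apiiro does not describe its decision logic in implementation terms, but the idea of combining a finding with runtime context and organizational policy can be sketched as a small rules function. Everything below (types, fields, thresholds) is hypothetical and only illustrates the shape of a runtime-aware, policy-aware fix decision, not the product's actual behavior.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source: str              # e.g. "SAST", "SCA", "secrets", "API"
    severity: str            # "low" | "medium" | "high" | "critical"
    internet_exposed: bool   # runtime context: is the affected service reachable externally?
    handles_pii: bool        # business-impact signal

@dataclass
class OrgPolicy:
    autofix_min_severity: str      # only auto-fix at or above this severity
    require_review_for_pii: bool   # PII-handling services need human review

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def decide(finding: Finding, policy: OrgPolicy) -> str:
    """Return 'autofix', 'propose_fix', or 'track' for a single finding."""
    severe_enough = (SEVERITY_ORDER.index(finding.severity)
                     >= SEVERITY_ORDER.index(policy.autofix_min_severity))
    if not severe_enough and not finding.internet_exposed:
        return "track"           # low-risk: record it without interrupting the developer
    if finding.handles_pii and policy.require_review_for_pii:
        return "propose_fix"     # generate the fix, but route it through review
    return "autofix"             # apply the fix directly in the IDE

policy = OrgPolicy(autofix_min_severity="high", require_review_for_pii=True)
finding = Finding(source="SAST", severity="critical", internet_exposed=True, handles_pii=False)
print(decide(finding, policy))   # -> "autofix"
```

In a real platform these inputs would come from code-to-runtime mapping and a policy engine rather than hard-coded fields, but the branching shows why runtime exposure and business impact, not raw severity alone, determine what gets fixed automatically.
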
AI is impacting nearly every aspect of the channel, with more channel marketers adopting the technology to equip themselves with essential tools and strategies. For more information, visit the original article at Channel Insider.

Frequently Asked Questions (FAQ)

About the AutoFix AI Agent

Q: What is the primary function of Apiiro's AutoFix AI Agent?
A: The AutoFix AI Agent is designed to automatically remediate code and design risks directly within an Integrated Development Environment (IDE).

Q: How does the AutoFix AI Agent differ from other AI coding assistants?
A: Unlike many AI coding assistants that lack runtime context, Apiiro's agent leverages deep code-to-runtime context to make precise, risk-based decisions.

Q: What types of risks can the AutoFix AI Agent address?
A: It can fix design and code risks, including findings related to SAST, SCA, secrets, and API security.

Q: How does the AutoFix AI Agent integrate into a developer's workflow?
A: It operates directly within the developer's IDE without requiring plug-ins, using a Model Context Protocol (MCP) connection.

Q: What motivated Apiiro to develop this tool?
A: The rapid adoption of AI coding assistants like GitHub Copilot and Gemini Code Assist, which often lack security governance and context, motivated Apiiro to create a more secure solution.

Risks Associated with AI-Generated Code

Q: How much AI-generated code contains vulnerabilities?
A: According to CSET, up to 50% of the code generated by AI coding assistants contains vulnerabilities.

Q: What is the potential impact of vulnerabilities in AI-generated code?
A: Up to 10% of these vulnerabilities can be actively exploitable and have a true business impact.

Q: What are the broader cybersecurity risks posed by AI systems like LLMs?
A: LLMs and similar AI systems can generate insecure code, be vulnerable to attacks, and create feedback loops that negatively impact future AI training.

Tool Capabilities and Benefits

Q: How does the AutoFix AI Agent ensure fixes are aligned with enterprise standards?
A: It uses unique runtime context, understanding an organization's software architecture, security policies, business impact, and risk acceptance lifecycle to deliver enterprise-aligned autofixes.

Q: What is the "Risk Graph engine" mentioned in relation to the AutoFix AI Agent?
A: The agent utilizes data from Apiiro's platform, which maps software architecture using Deep Code Analysis (DCA), Code-to-Runtime matching, and the Risk Graph engine.

Q: What are the core capabilities of the AutoFix AI Agent, beyond just fixing code?
A: Its core capabilities include AutoFix (fixing risks), AutoGovern (enforcing policies), and AutoManage (automating risk lifecycle management).

Q: How does Apiiro ensure its tool is built for scale and reliability?
A: The agent scales expertise by automatically generating threat models for feature requests before code is written and by fixing findings across various security testing domains.

Crypto Market AI's Take

The introduction of Apiiro's AutoFix AI Agent marks a significant step forward in addressing the security challenges introduced by the widespread adoption of AI coding assistants. In the realm of cryptocurrency development, where speed and innovation are paramount, the potential for vulnerabilities in AI-generated code is a serious concern. Our platform, Crypto Market AI, focuses on leveraging AI for market analysis and trading strategy, but we recognize that robust security practices are fundamental to the entire technological ecosystem. Tools like Apiiro's AutoFix AI Agent highlight the growing trend of integrating AI into every layer of software development, aiming to enhance both productivity and security.

More to Read:

  • AI-Driven Crypto Trading Bots: The Future of Investment?
  • Understanding Blockchain Security: A Comprehensive Guide
  • The Impact of AI on Cybersecurity