From Autocomplete to Agents: AI Coding State of Play
software-development

Birgitta Böckeler explores how AI agents transform software development, highlighting productivity, risks, and sustainable usage.

August 13, 2025
5 min read
Birgitta Böckeler

From Autocomplete to Agents: The Evolving State of AI-Assisted Coding

Birgitta Böckeler shares a clear-eyed view of how AI agents are changing software development: the potential productivity gains, the risks of "vibe coding," and the long-term impact on code quality and maintainability. AI-assisted coding has moved well beyond simple autocomplete suggestions to sophisticated agents that promise substantial productivity gains but also introduce new challenges. Understanding how to work with these tools effectively and sustainably is key to harnessing their benefits while keeping the risks in check.

The History of Features

AI coding assistants began as enhanced autocomplete tools, suggesting code snippets or entire method bodies based on comments or method signatures. Over time, chat features were integrated directly into IDEs, letting developers ask questions without leaving their environment, such as whether Python has static functions, a question that might otherwise require extensive web searching. Further IDE integration added Quick Fix menu options like "fix this using Copilot" or "explain this using Copilot." Developers gained more control by pointing assistants at specific files for context, and inline chat windows allowed incremental code changes with visible diffs, improving control and review.

More advanced chat features enable querying unfamiliar codebases, effectively improving on traditional text search by turning questions into search queries. Different assistants use various techniques, such as vector embeddings of the codebase, to provide context-aware answers. Context providers became increasingly important, allowing assistants to pull in relevant information such as git diffs, local changes, terminal output, or documentation URLs, and integrations with tools like Jira and Confluence also emerged.

Meanwhile, the underlying models evolved from Codex and GPT-3.5 to GPT-4 and specialized models like Anthropic's Claude Sonnet series, which has become very popular for coding tasks. Reasoning capabilities also improved, aiding planning and debugging.

Potential Impact of Coding Assistants on Cycle Time

Estimating the impact on development speed involves considering how much time developers spend coding, how often assistants are useful, and the speedup they provide when used. For example, if coding is 40% of cycle time, assistants are useful 60% of that time, and they speed up tasks by 55%, the overall cycle time improvement is about 13%. While this is less dramatic than some marketing claims, it is still a meaningful productivity gain.
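
As a rough illustration, that estimate is simply three factors multiplied together; the percentages below are the example figures above, not measured values:

    # Back-of-the-envelope cycle-time estimate (illustrative numbers only)
    coding_share = 0.40        # fraction of cycle time spent coding
    assistant_useful = 0.60    # fraction of coding time where the assistant helps
    speedup_when_used = 0.55   # time saved on the tasks it does help with

    cycle_time_saved = coding_share * assistant_useful * speedup_when_used
    print(f"Overall cycle-time reduction: {cycle_time_saved:.1%}")  # prints 13.2%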

GenAI Tooling - A Moving Target

The landscape is rapidly evolving with the introduction of AI agents, especially supervised agents where developers remain in control and supervise the AI's actions. Autonomous agents that complete entire tasks without supervision are not yet reliably practical. Open-source projects like Cline pioneered agentic modes before many well-funded commercial products. Most current experience is with agents editing existing codebases rather than creating projects from scratch. Agents now work on larger contexts, often spanning multiple files.

What is an Agent?

In AI-assisted coding, an agent is a coding assistant that orchestrates prompts with context from the codebase and has access to tools like reading and changing files, executing commands, and running tests. It interacts with the large language model by describing available tools and requesting actions, receiving feedback such as test results, and iterating accordingly. Examples include agents modifying code, running tests, performing web research to solve dependency issues, and automatically fixing lint errors by reacting to IDE warnings.
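
As a rough sketch of that loop, the Python below wires a few tools to a model call; call_model, the action format, and the tool set are hypothetical stand-ins, not any specific assistant's API:

    # A highly simplified agent loop (illustrative only)
    import subprocess

    def read_file(path: str) -> str:
        with open(path) as f:
            return f.read()

    def write_file(path: str, content: str) -> str:
        with open(path, "w") as f:
            f.write(content)
        return "wrote " + path

    def run_command(cmd: str) -> str:
        # e.g. run the test suite or a linter and capture the output as feedback
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr

    TOOLS = {"read_file": read_file, "write_file": write_file, "run_command": run_command}

    def agent_loop(task: str, call_model) -> str:
        observations = []
        while True:
            # The prompt describes the task, the available tools, and prior tool output
            action = call_model(task=task, tools=list(TOOLS), observations=observations)
            if action["type"] == "finish":
                return action["summary"]
            # Execute the requested tool and feed the result back to the model
            output = TOOLS[action["tool"]](*action["args"])
            observations.append((action["tool"], output))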

Agentic Modes

Agentic modes represent a significant step forward: developers supervise the AI as it performs multi-step tasks. This interaction can reduce cognitive load and help with design thinking by generating multiple options.

New standards like the Model Context Protocol (MCP) enable coding assistants to interact with local or remote servers that provide additional context or capabilities, such as browsing applications or querying test databases. However, MCP servers are currently a security risk due to their open and unregulated nature. Custom instructions or rules let developers configure assistants with project-specific knowledge or preferences, improving consistency and usability, though these too can introduce vulnerabilities if sourced uncritically from the internet.

Effective workflows with agents include planning together before coding, keeping sessions small with concrete instructions, and maintaining memory (e.g., a Markdown file) to track progress and context across sessions.
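
To make the custom rules and memory ideas concrete, a project-level file might look something like the sketch below; the file name, wording, and conventions are hypothetical, since each assistant has its own format:

    # project-rules.md (hypothetical; referenced from the assistant's custom-instructions setting)
    - Plan changes with me before editing files; keep each session to one small task.
    - Follow the existing module structure; do not add wrapper layers for backward compatibility.
    - Run the test suite after every change and report failures verbatim.

    ## Session memory
    - (append a short summary of decisions and open questions here after each session)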

Vibe Coding and AI as a Teammate

"Vibe coding" refers to a style where developers rely heavily on AI, sometimes without reviewing code closely, iterating via chat or voice commands until the AI produces acceptable results. While useful for quick tasks or prototypes, it carries risks if used exclusively. Birgitta suggests thinking of AI as a teammate with characteristics: eager, well-read but inexperienced, stubborn, and polite. Understanding these traits helps developers know when to trust or verify AI outputs.

AI Missteps

AI assistants can make mistakes such as proposing brute-force fixes without understanding root causes, generating unnecessary wrapper methods for backward compatibility, or producing verbose and brittle tests. They may also misplace tests or fail to follow test-driven development practices effectively. Design issues can arise, such as unnecessarily increasing parameter complexity or dependencies, leading to codebase sprawl.
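
As a small illustration of the unnecessary-wrapper pattern (invented for this summary, not code from the article):

    # The agent renames a function but leaves a do-nothing wrapper "for backward
    # compatibility" instead of updating the call sites.
    def calculate_total(prices):
        return sum(prices)

    def compute_total(prices):
        return calculate_total(prices)  # adds indirection, no behavior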

Impact Radius of AI Blunders

Mistakes can have varying impacts:
  • Commit level: Obvious errors that may slow development.
  • Team level: Friction from inefficient workflows or broken features.
  • Codebase lifetime: Long-term maintainability risks from poor design or brittle tests.

Studies already show increased code churn and duplication correlating with the adoption of AI-assisted coding, signaling potential future maintainability challenges.

Working With AI Agents

AI is now a permanent part of the developer toolbox. To use it responsibly:
  • Avoid complacency and review AI-generated code carefully.
  • Use vibe coding sparingly and only when appropriate.
  • Define clear feedback loops and testing strategies.
  • Employ code quality monitoring tools like SonarQube or CodeScene.
  • Integrate AI checks early in the development pipeline, e.g. via pre-commit hooks (a sketch follows this list).
  • Foster a culture that balances experimentation with skepticism, encouraging collaboration between enthusiasts and skeptics.
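
As one possible shape for such an early check, the hook below runs a linter and the test suite before each commit; the specific tools (ruff and pytest) are assumptions for illustration, not requirements from the article:

    #!/usr/bin/env python3
    # Hypothetical .git/hooks/pre-commit script: gives AI-assisted changes an
    # early, automated feedback loop before they land in the repository.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],  # lint (assumes ruff is installed)
        ["pytest", "-q"],        # run the test suite (assumes pytest)
    ]

    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"Pre-commit check failed: {' '.join(cmd)}")
            sys.exit(1)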

Frequently Asked Questions (FAQ)

AI-Assisted Coding in General

Q: What are the primary benefits of using AI agents for coding assistance?
A: AI agents can significantly boost developer productivity by automating repetitive tasks, suggesting code, answering technical questions, and even helping with debugging, thereby potentially reducing development cycle times.

Q: What are the risks associated with AI-assisted coding?
A: Key risks include "vibe coding" (over-reliance without critical review), the potential introduction of subtle bugs, increased code churn, and long-term maintainability issues if AI-generated code isn't carefully managed and reviewed.

Q: How has AI-assisted coding evolved over time?
A: It has progressed from basic autocomplete suggestions to integrated chat features within IDEs, offering quick fixes, code explanations, codebase querying, and context providers for more relevant assistance.

Working with AI Agents

Q: What defines an "agent" in the context of AI-assisted coding?
A: An agent is a coding assistant that can orchestrate prompts, utilize codebase context, and perform actions like reading and changing files, executing commands, and running tests, interacting with large language models to achieve coding tasks.

Q: What is "vibe coding" and why is it a concern?
A: "Vibe coding" is a development style where developers rely heavily on AI, often accepting its output without thorough review, aiming for a "good feeling" result. It is a concern because it can lead to overlooked errors, lack of understanding of the underlying code, and reduced code quality.

Q: How can developers work effectively and sustainably with AI coding agents?
A: Effective workflows involve planning with the AI, keeping tasks concrete, maintaining session memory for context, avoiding complacency, using "vibe coding" sparingly, defining feedback loops, and integrating code quality monitoring tools.

AI Missteps and Their Impact

Q: What are common types of AI missteps in coding?
A: AI assistants can generate brute-force fixes, create unnecessary wrapper methods, produce verbose or brittle tests, misplace tests, and sometimes fail to adhere to development practices like TDD. They can also introduce design flaws such as increased parameter complexity.

Q: What is the "impact radius" of AI blunders?
A: AI blunders can have an impact at the commit level (slowing development), the team level (creating workflow friction), or the codebase lifetime level (introducing long-term maintainability risks).

Technical Aspects of AI Agents

Q: What is the Model Context Protocol (MCP) and what are its security implications?
A: MCP allows coding assistants to interact with servers that provide additional context or capabilities. However, MCP servers currently pose a security risk due to their open and unregulated nature.

Q: How do custom instructions or rules benefit AI coding assistants?
A: Custom instructions allow developers to configure assistants with project-specific knowledge or preferences, improving consistency and usability. However, they can also introduce vulnerabilities if sourced uncritically.

Source: Originally published at InfoQ on Wed, 13 Aug 2025 09:04:00 GMT.

Crypto Market AI's Take

The evolution of AI agents in coding mirrors the advancements we're seeing in financial markets. Just as AI agents are revolutionizing software development by automating complex tasks and providing intelligent assistance, our platform at Crypto Market AI leverages cutting-edge AI to analyze cryptocurrency markets, identify trading opportunities, and provide actionable insights. We understand the importance of robust, context-aware AI, much like the need for AI coding assistants to have access to relevant codebases and tools. Our commitment to providing reliable, data-driven analysis aligns with the drive for quality and maintainability in AI-assisted coding. Exploring how AI is transforming one sector can offer valuable perspectives on its potential in others, including the dynamic world of cryptocurrency.

More to Read:

  • AI Agents: The Future of Business Automation and Customer Engagement
  • Understanding AI Agent Washing: Risks and Realities