
AI Agents Are Rapidly Moving Us from Coding to Better Architecting

13. 01. 2026
Overview

Using AI agents for development isn’t about coding faster. It’s about shifting focus to architecture, design, and documentation. Projects that invest in clear structure and documentation get much better AI output. The real productivity shift is from coding to architecting and designing.

After several months of using the Claude Code AI agent (CC), software engineering is starting to feel closer to how engineering should feel. Using an AI agent for software development isn’t about coding faster; it’s about spending more time on tasks like architecture design, planning, learning, experimentation, and documentation.

We’ve always wanted to do those things well, but the pressure to ship fast often left them undervalued. AI-assisted development with agents like Claude Code changes that. It is not changing what engineering is, but it is changing which tasks get more time and attention in practice. With AI, execution becomes cheaper, which enables fast iteration and makes good decision-making and good documentation even more important.

Engineering Instead of Manually Coding

Before, a meaningful part of my day went into manual implementation. Now, with CC in the loop, I spend most of my time on:

  • Exploration and experimentation by rapidly prototyping and testing design options.
  • Learning fast and making grounded decisions fast.
  • Documenting decisions via Architecture Decision Records.
  • Defining module boundaries, responsibilities, and interfaces.
  • Documenting conventions for project structure, coding patterns, and rules.
  • Designing test strategy, conventions, patterns, and coverage expectations.
  • Orchestrating and evaluating AI agent work.

The last one is a new skill that I practice almost every day. The rest is nothing new; it’s just much easier and faster to do with an AI agent.

The result is that, for me, development became less about “write the next function” and more about engineering: learning, making, and documenting decisions so the AI agent and the team stay aligned.

The Unexpected Winner: Documentation

If I had to name the single biggest productivity gain with Claude Code, it’s the documentation. CC is great at generating and updating docs continuously.

ADRs, project structure documentation, coding conventions, patterns, usage docs, test strategies: these artifacts typically lag behind (or never get written at all). With AI, they can be maintained as first-class artifacts. More importantly, they gain significant new value as the context that makes AI agents effective.

From Guesswork to Experimentation

There’s a psychological shift when trying an idea becomes low-cost.

Manually coding every approach to “see what works” is rarely possible. With CC, I can explore multiple options quickly, compare tradeoffs, and discard what doesn’t hold up. That used to be too slow, and I would sometimes be forced to settle for “good enough.” Now I don’t have to.

Troubleshooting Delegation

CC speeds up troubleshooting by automating issue reproduction. Instead of repeatedly restarting the app, clicking through UI flows, and stepping through code line by line in a debugger, I can paste an error log or screenshot into CC and ask it to investigate.

Not all problems are discoverable this way, but most of them are. Issue discovery is typically more time-consuming than implementing a fix, and automating it even partially is a significant time saver.

Test Design Over Test Data Management

Automated testing usually fails because test writing and maintenance become a second product. An AI agent like CC helps here, too. If the project has a well-defined test strategy and documented patterns and anti-patterns, the agent can produce good test code. That leaves more time to focus on what matters more: reviewing and designing test scenarios that validate business logic and catch edge cases.

There’s also a direct benefit for AI-assisted development, because good test design makes verifying AI code output much easier. If CC implements a feature, a comprehensive test suite immediately shows whether the implementation is correct. Good test design and AI-assisted implementation reinforce each other.

The practical result is that AI handles the boilerplate (test data, setup, teardown, assertion syntax) while I invest time in edge cases, failure modes, and business logic validation that actually prevent production incidents.

Designing and architecting instead of coding with an AI agent

Lessons from the Trenches

Based on direct experience running CC on projects, here’s what delivered good results for me:

Set Up Your Project as an Analyzable Artifact

A project with good coverage of business and technical documentation is easier for CC to understand. CC is good at pulling context from code, but explicit documentation leaves less room for misinterpretation. It also makes the AI agent faster and saves tokens.

When Claude Code operates with a well-documented project, it generates code that fits your system’s philosophy. With proper context, the difference in output quality is substantial.
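In practice, much of that context can be anchored in the project’s CLAUDE.md memory file, which CC loads automatically at session start. A minimal sketch, where the file names and rules are illustrative, not a fixed convention:

```markdown
# CLAUDE.md — loaded automatically by Claude Code at session start

## Documentation manifest
- docs/usage-guide.md      – how to build, run, and deploy
- docs/design.md           – system boundaries, module responsibilities
- docs/conventions.md      – coding conventions, patterns, anti-patterns
- docs/testing-strategy.md – test types, structure, coverage expectations

## Rules
- Respect existing module boundaries; no cross-module imports.
- Check docs/adr/ before changing anything covered by an ADR.
```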

Document Requirements

I like to keep a set of product and delivery docs close to the codebase: a project charter, product requirements (functional and non-functional), and a detailed set of epics and stories. These can easily be created in an issue tracker like GitLab or GitHub with the help of MCP servers, but I prefer keeping versions in the repo because it’s easier to review changes over time and easier to include in CC sessions. I use CC to generate these files too, with predefined templates and commands.

CC can also be used for brainstorming sessions. I’ve used Anthropic Skills and the BMAD Method on several occasions with good results. With CC skills, you get official instructions from Anthropic for documentation generation and brainstorming. With BMAD, you get the Analyst agent and can use the *brainstorm command for structured ideation techniques, or Multi-agent Group Chat via *party-mode. These repositories are excellent starting points for tailoring a CC agent to your engineering process, and you can adjust them to your needs over time.

Experiment and Document

When I want to evaluate a design pattern, I start by describing the goal, constraints, and other context that matters. Then I ask Claude Code in a higher-reasoning mode (with the ultrathink switch) to produce a plan or implement a small spike.

After reviewing the result, I treat the AI agent like a sparring partner. I ask it to explain assumptions, list tradeoffs, and compare the approach against the project’s documentation. If something does not feel right, I challenge its approach with additional questions. “Did you take into consideration our architecture?” “Is this the best practice approach?” “What if we need to scale this?” “What does a simpler alternative look like?” Sometimes I ask it to optimize a part or investigate additional documentation, and other times I ask it to explain the mechanism so I can validate the reasoning.

These research loops are extremely valuable to me because they turn vague assumptions into explicit criteria. Once I’m confident, I have it capture the outcome as a clear decision: proceed or reject, plus the rationale, risks, and the implementation plan.

Define the Technical Contract

The documentation I generate isn’t something new. It’s the same docs strong teams should maintain anyway. The difference is that now it directly improves AI output quality.

Here’s the documentation structure I maintain for every project:

  • Project Usage Guide – How to build, run, and deploy
  • Architecture Decision Records (ADRs) – The why behind technical choices
  • Project Design – System boundaries, module responsibilities, data flows
  • Project Structure – Folder organization and file placement conventions
  • Coding Conventions – Naming, formatting, patterns, and anti-patterns
  • Testing Strategy – Test types, structure, patterns, and anti-patterns
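One possible layout for these documents in the repository (the file and folder names are illustrative):

```
docs/
├── usage-guide.md        # build, run, deploy
├── design.md             # boundaries, responsibilities, data flows
├── structure.md          # folder organization and file placement
├── conventions.md        # naming, formatting, patterns, anti-patterns
├── testing-strategy.md   # test types, patterns, coverage expectations
└── adr/
    ├── 0001-database-migrations.md
    └── 0002-error-handling.md
```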

I use CC commands to generate ADRs after research and exploration sessions. The pattern is usually explore options → experiment → decide → document. The ADR captures not just the decision but the alternatives considered and the reasoning behind it. This context is very valuable when CC needs to make related decisions later.
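The ADR format itself is the familiar lightweight one; a condensed, entirely hypothetical example:

```markdown
# ADR 0007: Optimistic locking for order updates

Status: Accepted

Context: Concurrent order edits caused lost updates under load. We
compared pessimistic row locks, optimistic versioning, and queue
serialization.

Decision: Add a version column and reject stale writes.

Consequences: No lock contention; clients must handle conflicts and
retry.

Alternatives considered: Pessimistic locks (deadlock risk), per-order
queues (operational complexity).
```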

The key practice is iterating documentation alongside implementation. When I discover that a documented convention doesn’t work in practice, I update the documentation immediately. The docs evolve with the codebase.

For session priming, I start each CC session by loading the project usage guide and providing a manifest of available documentation. CC learns to pull in additional files when relevant context is needed.
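A sketch of what that priming can look like as a custom slash command; the `.claude/commands/` location and invocation as /prime follow CC’s custom command mechanism, while the doc paths are illustrative:

```markdown
<!-- .claude/commands/prime.md — invoked as /prime -->
Read @docs/usage-guide.md to learn how to build, run, and test this
project.

Additional documentation exists; load a file only when the task
requires it:
- docs/design.md – system boundaries and data flows
- docs/conventions.md – coding conventions and anti-patterns
- docs/testing-strategy.md – test patterns and coverage expectations

Confirm what you have loaded, then wait for my task.
```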

Well-documented code conventions, patterns, and architectural decisions matter. When Claude Code reads explicit decisions, it behaves better and more predictably.

The “Reference Project” Strategy

CC is great at pattern matching. If you have a well-architected project, you can use it as a reference. Ask CC to analyze a concept, pattern, or feature in the reference Project B, compare it to your target Project A, and produce a gap report. Based on that report, you can start planning an implementation of the same concepts and features in Project A.
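A prompt along these lines can kick off the comparison (the project path and the feature chosen are hypothetical):

```
Analyze how error handling and logging are implemented in
../reference-project (Project B). Compare it to this codebase
(Project A) and produce a gap report: which concepts exist in the
reference but not here, with concrete file examples. Don't change
any code yet.
```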

I’ve used this approach on existing brownfield projects to:

  • Restructure frontend code into a more maintainable folder structure
  • Introduce infrastructure for frontend tests (unit and integration) where testing was completely missing
  • Implement a database migration strategy where none existed
  • Apply consistent error handling and logging where it was inconsistent or absent
  • Enhance validation logic for increased robustness and security
  • Implement internationalization from scratch where it was missing

Good examples compress design time. Every well-designed, well-structured project can serve as a design reference.

Plan Mode

I initially used spec-driven development frameworks like BMAD and GitHub Spec Kit for complex tasks. They work well, but with CC plan mode, you get spec-driven development out of the box. With the skills feature and recent CC improvements in automatic subagent spawning, parallel execution, and persistent plan files, these frameworks are now optional.

I now start almost every non-trivial session in plan mode, regardless of whether the output is code, documentation, or an analysis report. The pattern is always the same: describe the goal, review the plan, adjust if needed, and execute. The upfront investment in planning reduces session time and improves output quality.

For complex tasks, CC plan mode spawns subagents to research different aspects in parallel, then synthesizes findings into a coherent plan.

Commands, Subagents, and Skills

I maintain roughly 15 custom commands for repetitive workflows: priming new sessions, generating documentation types, updating existing docs, and triggering implementation patterns. Commands eliminate prompt engineering overhead. Instead of crafting the same detailed prompt repeatedly, I save it as a command. In CC, commands can even be parameterised.
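For example, the ADR-generation command mentioned earlier could be saved roughly like this; $ARGUMENTS is CC’s placeholder for command parameters, and the template path is illustrative:

```markdown
<!-- .claude/commands/adr.md — invoked as /adr <topic> -->
Create a new ADR in docs/adr/ about: $ARGUMENTS

Follow docs/adr/template.md. Number it sequentially, capture the
alternatives we discussed in this session, and list risks and
consequences. Ask me before writing if the decision is still unclear.
```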

For specialized work, I use about five subagents: general development, code review, QA/testing, frontend expertise, and backend expertise. Each subagent has its own context, which helps keep the main session context clean and more durable. When implementing a feature, I like to split tasks among different subagents so I can complete the implementation within a single session.
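A subagent is a markdown file with YAML frontmatter under `.claude/agents/`. A sketch of the code-review one, where the frontmatter fields follow CC’s subagent format but the body, tool list, and doc paths are my own illustration:

```markdown
---
name: code-reviewer
description: Reviews changes against our conventions and testing
  strategy. Use proactively after each implementation task.
tools: Read, Grep, Glob
---
You are a strict code reviewer. Check the latest changes against
docs/conventions.md and docs/testing-strategy.md. Report violations
with file and line references, ordered by severity. Never edit code.
```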

Troubleshooting in the Background

Troubleshooting takes a lot of time. It consists of repetitive work: simulating user flows, stepping through code, inspecting variables, and waiting for rebuilds and restarts. AI tools are well-suited to that type of work.

All this tedious repetition can be offloaded to an AI agent like CC in a separate session. My approach is to paste the error context into a new CC session, ask it to investigate and attempt a fix, and then switch back to my primary work in another session. CC troubleshoots in the background: it repeats the user flow, reads logs, adds diagnostic output, tries a fix, restarts, and iterates. These iterations take a lot of time, but since they run in a separate session while I continue working on something else, that time costs me nothing.
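The hand-off prompt doesn’t need to be elaborate; something like this, where the flow and verification steps are hypothetical:

```
Here is the stack trace from the checkout flow: [pasted log].
Reproduce it: start the app, walk through the flow, and read the
server logs. Add temporary diagnostic logging where needed. Iterate
until you find the root cause, apply a fix, and verify it by
re-running the flow. Summarize your findings when done.
```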

Not every issue gets solved this way, but even a partial investigation saves significant time, so it pays off to try this approach first, repeat it a few times if needed, and fall back to manual debugging only if it’s not progressing. You don’t lose anything, since you’re working on something else in parallel.

Managing Costs

A quick note on costs: heavy CC usage burns a lot of tokens. On a per-token plan, costs quickly become prohibitive, so a flat-rate subscription like Max or Pro is essential.

However, there’s an alternative that also works well: using CC with your own models. If you’re running infrastructure with capable hardware, vLLM 0.11.2+ now supports serving models via the Anthropic API format. For existing OpenAI API deployments, LiteLLM can translate between formats. Open-source models like Qwen3-Coder and newer ones like MiniMax M2.1 and GLM 4.7 work well with CC, delivering good results while giving you full control over costs.
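A minimal sketch of pointing CC at such an endpoint, assuming the Anthropic-compatible vLLM serving described above; the model name and port are illustrative, while ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN are the environment variables CC reads for a custom endpoint:

```bash
# Serve an open-weights coder model; per the text above, vLLM 0.11.2+
# can expose the Anthropic API format (model and port are illustrative).
vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct --port 8000

# Point Claude Code at the self-hosted endpoint instead of Anthropic's API.
export ANTHROPIC_BASE_URL="http://localhost:8000"
export ANTHROPIC_AUTH_TOKEN="local-dummy-token"  # sent to the endpoint; a local server may ignore it
claude
```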

We’re currently running this setup with Qwen3-Coder, and the results are comparable to Claude Sonnet for many tasks.

The Real Paradigm Shift

The paradigm shift in AI-assisted software development isn’t just about AI writing code. The projects that will succeed with AI assistance aren’t the ones that generate the most code the fastest. They’re the ones that invest in proper documentation, clear architecture, and thoughtful design, and use AI to handle the mechanical translation of those good ideas into working code.

The real productivity shift isn’t about removing the human from coding. It’s about elevating the human to design.

Perhaps CC and similar tools will become so capable that they can produce high-quality systems from design to implementation entirely autonomously. Right now, that’s not the case. Will it be in the future? Time will tell.
