Working with AI Code Generation: A Developer's Field Guide

From Code Monkey to Code Referee: How AI Is Forcing Us to Finally Become Real Engineers

After writing about the risks of AI-induced technical debt for engineering managers, I want to share some practical advice for us developers. I’ve spent months experimenting with different ways to use AI coding tools, making all the mistakes so you don’t have to. Well, at least not all of them.

The Big Shift Nobody Talks About

When you start using AI to generate code, you’re no longer primarily a code writer. You become a code reviewer. All day, every day. Think about that for a moment.

How much time have you spent reviewing other people’s code in your career? Now imagine that’s most of what you do. Reviewing somebody else’s code for hours is tiring, especially when it’s complex. With AI development, this is your new normal.

The code feels foreign - like it was written by an over-eager junior who’s great at Googling Stack Overflow but doesn’t quite grasp the bigger picture. It’s filled with clever tricks and nice library usage, but often misses what experienced developers bring: clean structure, good abstractions, and the understanding of what to leave out.

As they say, great art - or great design - is done not when there is nothing more to add, but when there is nothing left to remove. The insights needed to generalize, abstract and simplify are not (yet) core competencies of AI-driven development.

Code Rot Happens Much Faster

In a recent Swift project, I watched how quickly things got messy. After just three days of back-and-forth with AI, I had code that almost worked but was becoming a nightmare. The issues were familiar but amplified:

  • It kept creating new variables despite existing ones that did the same thing
  • It mixed new and old Swift APIs (ObservableObject vs @Observable) - see the sketch after this list
  • Functions were duplicated with slight variations
  • Old implementations weren’t cleaned up when adding new approaches
  • State management became increasingly confusing
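
To make the API-mixing point concrete, here is a minimal sketch of the kind of thing I kept finding. The model and view names are made up for illustration; only the two observation APIs are the point:

```swift
import SwiftUI
import Observation

// Old style (pre-iOS 17): Combine-based ObservableObject with @Published.
final class LegacySettingsModel: ObservableObject {
    @Published var isDarkMode = false
}

// New style (iOS 17+): the @Observable macro from the Observation framework.
@Observable
final class SettingsModel {
    var isDarkMode = false
}

// AI sessions happily mix both conventions in the same codebase:
struct LegacySettingsView: View {
    @StateObject private var model = LegacySettingsModel()  // old pattern
    var body: some View {
        Toggle("Dark mode", isOn: $model.isDarkMode)
    }
}

struct SettingsView: View {
    @State private var model = SettingsModel()  // new pattern
    var body: some View {
        Toggle("Dark mode", isOn: Binding(
            get: { model.isDarkMode },
            set: { model.isDarkMode = $0 }
        ))
    }
}
```

Neither pattern is wrong on its own; the problem is the inconsistency, which doubles the number of conventions every reader has to keep in their head.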

Most organizations are already struggling with technical debt under normal circumstances. The average codebase has plenty of issues, and there’s constant pressure to deliver features over quality improvements. Now imagine accelerating that problem by 5-10x. That’s what happens when AI generates code without proper guardrails in place.

Practical Approaches That Actually Work

Despite the challenges, after months of use AI has become a tool I wouldn’t want to work without. It’s a huge help if we use it right. Here are some techniques I’ve developed that help me get the benefits of AI without the mess:

1. Create and Maintain an Architecture Overview

Before generating any significant code, build a simple design document - there’s a sketch after this list - outlining:

  • What each component should do (and not do)
  • How you’ll handle state and events
  • Threading and concurrency approaches
  • Key interfaces between components
  • And whatever else you think is important when working with your code base
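
In practice, mine is just a short markdown file living in the repository. Here is a hypothetical skeleton - the component names (AppModel, SessionStore, SyncService) are invented for illustration:

```markdown
# Architecture Overview

## Components
- AppModel: single source of truth for UI state. Does NOT persist anything.
- SessionStore: persists sessions to disk. Does NOT do networking.
- SyncService: the only component that talks to the backend.

## State & Events
- Views read from AppModel; they never keep derived copies of state.

## Threading & Concurrency
- Swift concurrency (async/await) only. No DispatchQueue, no Combine.

## Key Interfaces
- SessionStore: load() / save(_:)
- SyncService: push(_:) / pull()
```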

This isn’t just paperwork - it’s your map. Before each session with AI, review and update this document, and make the AI read it before generating code. You’ll be amazed at how much better the results become when the AI understands the bigger picture.

You can even use AI to help write these documents: it can look through your code base and produce a first draft. You might wonder why, if the AI can generate such an overview, it’s necessary to write the document first - doesn’t it already know all this while writing new functionality? Unfortunately, that’s not how AI works (currently). The newer ‘thinking’ models, combined with AI agents, may be better at it, but you really need to be involved in this step before the implementation. So make sure this document is accurate, to the point, and in line with how you want the code to be.

The funny thing is, we should always have been doing this. AI just forces us to be explicit about designs we used to keep only in our heads. It also makes it obvious that code-centric documentation should be stored together with the codebase, as text documents (markdown) in the same git repository. This, too, has always been a good idea, but many organizations have insisted on maintaining documents in MS Office, and it has sometimes been hard to make non-coders understand the benefits of text documents and proper version control using git. Now it becomes essential.

2. Create Special AI Exploration Branches

Small isolated code changes tend to work well with AI. But sometimes it’s nice to use AI when exploring new territory - new libraries and frameworks, or some part of the system where you’re trying to figure out the best implementation.

There may be times to “move fast and break things.” But these are experiments - not production code. Keep them isolated until you understand what you’ve learned.

Don’t let AI generate experimental code directly in your main branches. Create dedicated branches with a consistent prefix like aix/ for AI-eXploration. This gives you several benefits:

  • It’s clear to everyone (including future you) that this is experimental code
  • You feel free to explore without commitment
  • These branches can be automatically cleaned up later
  • You force yourself to review before merging anything

I’ve started treating these branches as learning tools, not as sources of final code. I explore options, take notes on what works, then implement a clean version myself in the main codebase. Or I use the AI to extract the learnings into a (markdown) document describing the new feature and implementation guidelines, and then let the AI do it right the second time. This works better than asking it to clean up afterwards.

3. Work in Small, Structured Steps

The quality of AI-generated code directly relates to how you structure your interactions. Here’s what works for me:

  1. Start with architecture first - discuss the approach before any real code appears, and capture it in one or more markdown files.
  2. Generate skeleton code - get interfaces and structure in place before implementation details. I frequently add this to the design document (with AI assistance), as it is faster to iterate over and leaves no code to clean up during refinement. (See the sketch after this list.)
  3. Ask explicitly for simplification - ask for it during the design process and while exploring implementation ideas, before the actual code changes. AI is not that good at cleaning up after the fact.
  4. Implement one small piece at a time - review thoroughly after each piece.
  5. Refactor the code - before merging changes to the main branch, refactor until you’re happy with the code. This step is really important: it helps you regain ownership of the code.
  6. Document key decisions - note why you chose certain approaches for future reference.
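
As an illustration of step 2, this is roughly what I mean by skeleton code: interfaces and types only, with implementations deliberately stubbed out. All names here are hypothetical:

```swift
import Foundation

// The shape of the component, agreed on before any implementation exists.
struct Session: Codable, Identifiable {
    let id: UUID
    var title: String
    var startedAt: Date
}

protocol SessionStore {
    func load() throws -> [Session]
    func save(_ sessions: [Session]) throws
}

// Stubbed implementation: the AI fills in one method at a time,
// and each piece gets reviewed before moving on.
final class FileSessionStore: SessionStore {
    func load() throws -> [Session] {
        fatalError("TODO: implement in a later, reviewed step")
    }

    func save(_ sessions: [Session]) throws {
        fatalError("TODO: implement in a later, reviewed step")
    }
}
```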

This structured approach helps you stay in control and prevents the rapid quality decay that happens in free-form AI sessions. The added benefit is that I now have much more design and architecture documentation than I used to.

4. Watch for These AI Code Smells

Beyond the usual code smells, I’ve noticed some patterns specific to AI-generated code:

  • Variable Soup: Creating new variables instead of using existing ones
  • API Time Travel: Mixing deprecated and current approaches
  • Ghost Code: Fragments left over from previous iterations
  • Library Overload: Adding dependencies for simple operations
  • Copy-Paste-Mutate: Similar functions duplicated with slight differences (see the sketch after this list)
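
Copy-Paste-Mutate in particular is easy to miss in review, because each function looks reasonable in isolation. A minimal, made-up Swift example of the smell and its fix:

```swift
import Foundation

// The smell: two near-identical functions left behind by separate AI iterations.
func formatPrice(_ amount: Double) -> String {
    String(format: "%.2f DKK", amount)
}

func formatPriceWithCurrency(_ amount: Double) -> String {
    String(format: "%.2f DKK", amount)  // same behavior, different name
}

// The fix: consolidate into a single function with one source of truth.
func formattedPrice(_ amount: Double, currency: String = "DKK") -> String {
    String(format: "%.2f %@", amount, currency)
}
```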

When you spot these, it’s time to step back and refactor. More importantly, guide the AI to avoid these patterns in the first place. It helps to have a document with general coding guidelines and standard prompts to include with every AI session. Some (most) AI tools have conventions for this. For example, the Claude Code agent looks for a file called CLAUDE.md in the root of the project; for other tools it is a set of VSCode settings.
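
Such a guidelines file can be short. Here is a hypothetical CLAUDE.md sketch - the rules are examples, and doc/architecture.md is a made-up path:

```markdown
# CLAUDE.md - project guidelines for AI sessions (illustrative)

- Read doc/architecture.md before proposing any code changes.
- Reuse existing variables, types and helpers; never introduce parallel ones.
- Remove superseded implementations in the same change that replaces them.
- Swift: use the @Observable macro; do not add new ObservableObject classes.
- Prefer the simplest solution; no new dependencies for simple operations.
```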

5. Try Different AI Models

Not all AI models work the same for code generation. I’ve found Claude 3.7 works best for my Swift code, but your mileage may vary. I’ve also tried Gemini, Grok, OpenAI, DeepSeek and others. It’s a constantly moving field, and it changes week by week. So get used to using more than one. There can also be differences between how well they work with your IDE, or fit into your workflow.

Sometimes getting a different perspective from another model helps clarify the best approach. It’s like getting a second opinion from another developer.

New Rules for a New Game

The rules of development are changing with AI, and we need to adapt our practices:

Code Reviews Need to Change

When reviewing AI-generated code:

  • Check every line - don’t assume anything
  • Look specifically for the AI code smells mentioned above
  • Confirm that the approach matches your architecture document
  • Challenge complexity - there’s almost always a simpler way

Refactor More Often

Set a regular schedule for refactoring after AI sessions. I’ve found that cleaning up after each significant feature addition prevents the codebase from spiraling out of control.

Focus on:

  • Consolidating similar implementations
  • Creating proper abstractions
  • Aligning with architectural principles
  • Removing unused code fragments

Documentation Becomes Essential

In traditional development, documentation is often an afterthought. With AI, it becomes your primary tool for maintaining quality. Your architecture document should evolve with your understanding, becoming a living record of what you’re building and why.

The Upside: Better Engineering Practices

There’s a silver lining here. The discipline required to work effectively with AI forces us to adopt practices we should have been using all along:

  • Clear architecture documentation
  • Regular refactoring
  • Thoughtful reviews
  • Intentional design
  • More comprehensive testing

Testing becomes even more critical with AI-generated code. You’ll need tests to ensure that the behavior remains correct through iterations and refactoring. Good test coverage also helps you build confidence that you understand what the AI has produced, and if you are using an AI agent, you can make it continue until all the tests pass.
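
A minimal sketch of what I mean, using XCTest. The function under test and all names are illustrative; the point is that tests like these pin behavior down, so you notice immediately if an AI iteration quietly changes it:

```swift
import Foundation
import XCTest

// Function under test (in a real project it lives in the app target).
func formattedPrice(_ amount: Double, currency: String = "DKK") -> String {
    String(format: "%.2f %@", amount, currency)
}

final class PriceFormattingTests: XCTestCase {
    func testFormatsWithTwoDecimals() {
        XCTAssertEqual(formattedPrice(12.5), "12.50 DKK")
    }

    func testSupportsOtherCurrencies() {
        XCTAssertEqual(formattedPrice(3.0, currency: "EUR"), "3.00 EUR")
    }
}
```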

These were always best practices, but now they’re no longer optional if you want to maintain sanity in your codebase.

Take Back Control

AI coding tools are powerful additions to our toolkit, but they don’t replace engineering judgment. Your job shifts from typing code to guiding implementation toward clean, maintainable solutions.

This isn’t bad news - it lets us focus more on the architectural and design aspects that are the most valuable parts of our craft. The AI handles the keystrokes; we handle the thinking.

At the end of the day, we still take full responsibility for the code we deliver. The AI is just a tool, like any other. Don’t blame the tool for messy code - blame the craftsman. (Yes, I’m looking at you. And me.) And if AI allows you to develop a feature 3x faster, then spend the majority of the gained time on improving the quality of code. We can still deliver faster and increase quality.

I plan to write a follow-up article with a deeper dive into the specific tools and prompts I have used, so stay tuned for more…

What’s your experience been with AI coding tools? Have you found effective ways to keep quality high while benefiting from the productivity boost? I’d love to hear your approaches.

(This article is also posted on my website: https://agilecoach.dk/posts/ai-developer-field-guide/)

Niels Buus

Freelance Ruby on Rails / Python Developer / DevOps / Available from October 2025

Hi Bo. Very insightful and well articulated. I feel like AI doesn't just make me faster - it also makes me better by providing fresh/new/alternative perspectives in every response. Whenever AI produces code, it forces me to vet it thoroughly (because you can't trust AI to be accurate here). Each time, I need to check if the proposed solution is up to my standard. And sometimes, it's not just up to my standard, but actually exceeds it in some aspects. That's really rewarding and makes the job of being a "code referee" feel like less of a chore and more of a learning opportunity.

It's inspiring to see how you've navigated the challenges and found ways to leverage these tools effectively. We'll continue reading it!

Jannik S.

Backend engineer [TypeScript | Python | Go | DevOps]

Cool, will read!
