The problematic path of least resistance for coding agents
How AI coding agents often treat symptoms instead of root causes, leading to technical debt, and some preventative measures for shipping better code.
What’s interesting in coding with AI is that models usually take the path of least resistance. This means a problem with a root cause at the database level might surface as a UI problem. If you have a lazy model, it will most likely attempt to solve the symptom (in this case, in the UI). While that might be fine at times, a human has to a) know that this is treating the symptom, and b) understand what implications this will have down the line.
When vibe coding, you won’t know this unless you actually look into the code. This is why it’s so important to be able to move between different layers of abstraction. Let me give an example:
You prompt:
It's taking a while to load the table data when refreshing the page. Fix it.
The model will start to look at various parts of the code to figure out how to solve it. Notice that I didn’t say “to find the root cause of the problem.” Some models are better at this than others. GPT-5-high and Codex, for example, visibly try to dig deeper.
The path of least resistance might be to:
- Add a skeleton loader to the table
- Cache data in the frontend
- Cache data in the backend
Because this solves the problem by making subsequent page refreshes load faster! And that’s true: it does solve the problem. But is there a better way? Probably.
An engineer familiar with the codebase might examine the full data lineage from client to server to database, and back. While doing this, they might find an imperfect SQL query that can be optimized significantly, causing all queries to run much faster and eliminating the need for skeleton loaders and caches altogether. In the end, these are two solutions with very different implications.
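To make the contrast concrete, here is a minimal sketch of what that root-cause fix can look like. The `orders` table, its schema, and the index name are all hypothetical, and SQLite stands in for whatever database you actually run:

```python
import sqlite3

# Hypothetical example: an "orders" table (name and schema are assumptions)
# that gets queried by customer_id on every page refresh.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(100_000)],
)

query = "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42"

# Without an index, the database scans the whole table on every load.
plan_before = conn.execute(query).fetchone()[-1]
print(plan_before)  # e.g. "SCAN orders" -- a full table scan

# The root-cause fix: an index turns the scan into a direct lookup,
# so skeleton loaders and caches become unnecessary.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute(query).fetchone()[-1]
print(plan_after)  # e.g. "SEARCH orders USING INDEX idx_orders_customer ..."
```

The query plan is the tell: the fix isn’t layered on top of the symptom, it removes the reason the page was slow in the first place.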
Adding a cache not only introduces complexity that must be maintained over time, but it might also introduce subtle bugs by requiring cache invalidation in certain places. Changes like this compound, and each one increases the likelihood of breaking things. If you’re working with coding agents, it will also make it harder for them to modify things in the future due to the accumulated complexity, also known as tech debt.
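The invalidation problem is easy to see in miniature. This sketch uses Python’s `functools.lru_cache` as the stand-in cache; the data store and function names are made up for illustration:

```python
import functools

# A toy "database" standing in for the real data source (hypothetical).
_db = {"report": "v1"}

@functools.lru_cache(maxsize=None)
def load_report(key):
    # The symptomatic fix: cache the expensive read.
    return _db[key]

first = load_report("report")   # "v1" -- cached on first read
_db["report"] = "v2"            # some other code path updates the data...
stale = load_report("report")   # still "v1" -- the cache has no idea

# The hidden cost: every write path must now remember to invalidate.
load_report.cache_clear()
fresh = load_report("report")   # "v2" at last
```

Every place that writes `_db` now carries an invisible obligation to clear the cache, and any future code (human- or agent-written) that forgets it ships a stale-data bug.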
On the other hand, fixing the root cause minimizes tech debt, teaches future coding agents what a performant vs. non-performant query looks like, and compounds in a positive way.
The problem is that if you don’t know the codebase, or you’re not looking at the code, you won’t know what good and bad look like. Implementing a cache works, and so does optimizing the query. The danger is that this will creep up on you over time, and at that point, it will be hard to reverse.
How do you prevent this from happening while still shipping fast? A couple of things you can do are:
- Find the root cause. If you can’t find it yourself, use a really smart model and prompt it to find the source first. Instead of:
It's taking a while to load the table data when refreshing the page. Fix it.
You can say:
It's taking a while to load the table data when refreshing the page. Thoroughly help me find the root cause of the issue and don't stop investigating until you're confident you've found one or multiple reasons for this issue.
Once the model has investigated, you can examine its findings and decide for yourself which fix makes the most sense.
- Ask domain experts to build out systems. Certain people have probably worked extensively in particular areas of the code. Ask them to set up rules and instructions for agents that can prevent repeating past mistakes. Outlining edge cases, learnings, and gotchas makes a big difference for coding agents (and humans too).
- Use more compute. Running multiple, specialized agents on your code can surface issues you wouldn’t have found yourself. It’s more costly, but accumulating tech debt over time is 100x more costly. I’ve been there pre-coding agents; trust me on that one.
In the not-too-distant future, I think many of these problems will go away with better models powering coding agents. They will think more deeply, be more intelligent, and have more intrinsic knowledge to prevent some of the issues mentioned above. Time to resolution will be a function of codebase size and cost. A large codebase exposes more surface area where bugs could be introduced, but spending more tokens will speed it up.
I’m looking forward to seeing this become reality. We’re almost there.