AI in software development is getting insanely powerful.
LLMs, copilots, agents, multi-step workflows…
…on paper, it all looks like massive productivity gains. And sometimes it is. But in practice, I’ve seen something very different happen over and over again: developers actually lose time, not gain it.
The main reason is simple:
AI can produce code that looks correct, works on the surface, and is confidently explained, but is subtly wrong.
And those subtle mistakes are often the most expensive ones.
The real problem: distance from the code
With modern AI tools, it’s very easy to become distant from the actual codebase.
Instead of designing, reasoning, and implementing, we:
- write a big prompt
- wait for a large chunk of code
- skim it quickly
- trust that “the AI probably knows better”
The code often runs. Tests might even pass. And the AI will confidently tell you the solution is solid and production-ready.
The number of agents and MCP servers wired into AI coding tools is now so large that it is increasingly hard to see where things could go wrong. Worse, developers trust the output more and more, because the tooling creates the sense that the right prompt, rules, and custom agents will handle hallucinations properly.
That’s where the danger is.
Because now you still have to read the whole implementation, written by a stranger: understand it, validate edge cases, debug weird behavior, and repeat until things work as expected.
Except now you’re doing it while fighting authority bias — there’s an “expert” telling you everything is fine.
This is not faster. It’s verification debt.
It effectively turns developers into code reviewers for a cocky, condescending new dev (the AI).
Here is a dummy example to illustrate this:
There is a function in the backend that is used everywhere in the app. It performs its main action and then runs some costly observability tasks.
These observability tasks should run in the background because the caller of the function doesn’t care about its response.
The correct approach is to schedule that extra processing asynchronously, fire and forget.
But the AI-generated code did something very subtle:
it awaited that processing instead of scheduling it as a background task.
In Python terms, think:
- what should have been asyncio.create_task(...)
- was instead written as await some_processing()
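A minimal sketch of the difference (the function names and the 0.5s of simulated work are made up for illustration):

```python
import asyncio

# Keep references so fire-and-forget tasks aren't garbage-collected mid-flight.
background_tasks: set[asyncio.Task] = set()

async def emit_observability_metrics() -> None:
    # Hypothetical stand-in for the costly observability work.
    await asyncio.sleep(0.5)

async def handle_request_slow() -> str:
    # The subtle bug: the caller now pays the full cost of observability.
    await emit_observability_metrics()
    return "done"

async def handle_request() -> str:
    # Fire and forget: schedule the work and return immediately.
    task = asyncio.create_task(emit_observability_metrics())
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)
    return "done"
```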
The result?
- Everything still worked
- No obvious errors
- But every single call was now slower
- Latency quietly increased across the system
This particular bug may be easy to catch, but imagine how subtle bugs like it pile up and get lost in the mix.
This happens because when developers rely too much on AI, they tend to skim the surface instead of reading line by line. And with large outputs, that tendency gets even worse.
This hurts everyone, from juniors to seniors
For junior developers, this is especially dangerous.
If you rely too much on AI early on, you tend to skip the learning phase: you don’t build strong mental models, and you accept solutions you don’t fully understand.
But this isn’t just a junior problem.
For experienced developers, the risk is skill atrophy.
If you stop practicing fundamentals and let AI do most of the thinking, you will most likely lose your instincts, your code reviews will get softer, and your ability to spot subtle issues will decline.
And then validating AI output becomes harder and harder, exactly when it matters most.
Ok, but what can we do now?
First rule: ask yourself if you even need AI here
One habit I find extremely important is this:
Before using AI, ask yourself:
“Is this actually faster with AI, or am I just outsourcing thinking?”
There are many cases where writing the code yourself is faster, clearer, and safer, especially for core business logic, performance-critical paths, concurrency, and security-sensitive code.
On the other hand, AI is excellent at things such as UI scaffolding, boilerplate, repetitive glue code, predictable transformations and, my favorite, reviewing my implementation and helping me spot bugs.
Use it where it shines. Don’t force it everywhere.
Design first, then let AI fill the boring gaps
A strategy that works extremely well is this:
You design the structure. The AI fills in the details.
As an intermediate or senior developer, you usually already know what you want to build at a high level.
For example, a simple contact page:
- frontend contact form
- validation
- backend API
- database table
- persistence logic
You don’t need the AI to decide what exists.
You already know that.
Instead:
- define the structure yourself
- break it into small, clear pieces
- ask the AI to implement one piece at a time
This keeps the context small, the hallucinations rarer, the review manageable, and you engaged with the code.
And it’s much faster than crafting one massive prompt and reviewing a massive output.
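For instance, the skeleton you hand to the AI can be as thin as this (FastAPI, the field names, and the table name are my assumptions, just to make the sketch concrete):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ContactMessage(BaseModel):
    # You decide the shape of the data, not the AI.
    name: str
    email: str
    message: str

@app.post("/contact")
async def submit_contact(payload: ContactMessage) -> dict:
    # Ask the AI for exactly one piece at a time, e.g.:
    # "validate the payload and persist it to the contact_messages table".
    ...
```

The structure is yours; the AI only fills in the bodies, one small prompt at a time.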
Small prompts, small context, small steps
Large prompts produce large hallucinations.
If you want better results:
- reduce context
- reduce scope
- reduce responsibility per prompt
Instead of:
“Implement the entire feature end-to-end”
Try:
- “Generate the frontend form UI”
- “Add input validation for these fields”
- “Implement this API handler with these constraints”
- “Write tests for this specific behavior”
You stay in control, and the AI becomes a focused assistant — not an architect making assumptions for you.
Use TDD as a guardrail (not as dogma)
One very powerful pattern is combining AI with TDD-style thinking.
You don’t even need to be strict TDD.
Just:
- define acceptance criteria
- define what “correct” means
- ask the AI to help write tests first (or alongside the code)
Tests force:
- clarity
- explicit behavior
- less hand-wavy logic
They also give you confidence that the AI didn’t just produce something that looks right.
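As a sketch of what that can look like, a test like this would have caught the awaited-instead-of-scheduled bug from earlier (it assumes the handler lives in a hypothetical app module and that the pytest-asyncio plugin is installed):

```python
import time

import pytest

from app import handle_request  # the fire-and-forget handler sketched earlier

@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_observability_work_does_not_block_the_caller():
    # Acceptance criterion: the caller's latency must exclude the costly
    # observability work (0.5s in the sketch). The awaited version fails this.
    start = time.perf_counter()
    await handle_request()
    elapsed = time.perf_counter() - start
    assert elapsed < 0.1
```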
Automate the boring validation
Linters, formatters, type checkers, static analysis — all of these matter even more with AI-generated code.
Humans skim.
Tools don’t.
Let automation catch bad patterns, unsafe constructs, incorrect usage and inconsistent styles.
These are cheap guardrails that pay off immediately. Better yet, they can easily become default rules for the AI in your project or personal configuration.
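As a concrete example, ruff (if that’s your linter) has a rule for the exact failure mode from the earlier bug: RUF006 flags an asyncio.create_task call whose return value is discarded, since such a task can be garbage-collected before it finishes:

```python
import asyncio

async def emit_observability_metrics() -> None:
    ...  # hypothetical background work

async def handler() -> None:
    # ruff's RUF006 ("asyncio-dangling-task") flags this line, because the
    # task reference is dropped and the task can be garbage-collected early.
    asyncio.create_task(emit_observability_metrics())
```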
Stay sharp: practice fundamentals on purpose
Here’s something I strongly believe:
The more AI advances, the more important fundamentals become.
If you’re not practicing DSA, system design, or concurrency basics, you slowly lose the ability to judge whether a solution is good or not.
One simple habit I highly recommend:
one LeetCode problem a day
It doesn’t have to be intense.
Just enough to keep your brain in shape.
It makes you more engaged with code and more critical during reviews. It also makes you more capable of spotting subtle issues and better at validating AI output.
And yes, it also helps when reviewing code written by teammates who are also using AI.
AI is a power tool, not autopilot
AI is incredible.
It can make you much more productive.
But only if you stay engaged, keep thinking, keep practicing and stay in the driver’s seat. The goal isn’t to fight AI. The goal is to use it deliberately, with good habits and strong guardrails.
Otherwise you’re not saving time, you’re just postponing the cost.