Here’s a pattern I keep seeing (and living):
A feature that would have taken 2-3 weeks gets estimated at “2 days with AI.”
It ships in 4. Sometimes 5.
Not because the AI was slow. Because there are three other “2-day AI projects” running simultaneously. Each one spiraling into bugs, edge cases, and integration issues nobody saw coming. Context-switching between half-finished features, fighting fires, somehow falling behind on all of them.
The AI writes code faster than any human could.
But we’re not shipping faster. We’re drowning.
And here’s the uncomfortable part: we’re doing this to ourselves.
The Promise vs The Reality
You’ve seen the headlines. AI coding tools boost productivity by 39%. Developers are shipping faster than ever. The future is here.
What they don’t tell you is what happens next.
Your manager sees those numbers too. And if AI makes you 39% more productive, why can’t you handle 60% more work? Why are estimates still slipping? Why are bugs still happening?
The math doesn’t add up. And you’re stuck in the middle trying to explain why “AI writes the code” doesn’t mean “features appear instantly.”
Here’s what the data actually shows:
A UC Berkeley study found that AI doesn’t reduce work; it intensifies it. One developer they interviewed put it perfectly: “You thought maybe you’d work less with AI. But you don’t work less. You just work the same amount or even more.”
TechCrunch reported last week that teams adopting AI workflows saw expectations triple and stress triple, while actual productivity went up by maybe 10%.
And here’s the kicker: a METR study found developers expected AI to speed them up by 24%. In reality? It slowed them down by 19%. But they still believed it had made them 20% faster.
The gap between perception and reality is dangerous. And most of us are living in it.
The High Expectations Problem
This isn’t just coming from management.
Yes, leadership hears “AI can write massive amounts of code” and expects you to prompt your way through multiple features in no time. First try, maybe second. They don’t understand how AI actually works.
But here’s the honest truth: we don’t either.
I thought I did. I thought “AI writes the code, I review it, we ship it” was the workflow. But that’s not what happens.
What happens is: AI writes the code. I start reviewing it. I find issues. I ask the AI to fix them. It creates new issues. I start another feature while waiting. That one has issues too. Now I’m juggling three half-finished features, each with its own set of AI-generated bugs I’m trying to understand and fix.
The promise was velocity. The reality is fragmentation.
And I keep saying yes to more because “it’s just AI, how hard can it be?” But the cognitive load of reviewing, validating, debugging, and integrating AI code across multiple parallel tracks is crushing.
Every feature looks 80% done. None of them actually ship.
The Estimation Trap
Here’s the confession I don’t want to make: I have no idea how to estimate tasks anymore.
Developers have always struggled with estimation. We underestimate until we get burned enough times to realize “add a simple feature” is never simple when there’s an existing codebase involved.
But AI broke our calibration completely.
A task that used to take 3 days now takes… 2 hours? 4 days? Both? Neither? It depends on:
- How well I can describe what I want
- How many edge cases the AI misses
- How much integration complexity exists
- Whether the AI understands the existing patterns
- How many iterations it takes to get it right
So now I swing between two extremes:
Underestimating because “AI will handle it” — then spending 3 days debugging what the AI generated in 20 minutes.
Overestimating because “who knows what AI will break” — then looking slow when it actually works the first time.
The old rules don’t apply. New ones haven’t emerged. And this isn’t just an academic problem:
Sprint planning becomes guesswork. Roadmaps turn into fiction. Technical debt compounds faster than we can track it. Trust erodes when commitments slip repeatedly.
When someone asks “how long will this take?” the honest answer is often “I don’t know anymore.”
The Hidden Costs Nobody’s Talking About
The expectation problem is obvious once you see it. But there are other traps hiding underneath:
You’re Not Writing Code Anymore. You’re Validating It.
One researcher described it perfectly: “A senior developer with Copilot doesn’t become a code-writing machine. They become a code-validation machine.”
When AI generates 40% more code, you have 40% more code to review. But reviewing AI code is different from reviewing human code. Humans make predictable mistakes. AI makes plausible-sounding nonsense that looks right until you run it.
Context switching between your work and reviewing AI output costs 20-30% of your focus per switch. When you’re juggling multiple AI-started features, you’re switching constantly.
You’re not more productive. You’re just more exhausted.
You Say Yes to Everything
AI makes tasks that used to be “too expensive” feel trivial. So you say yes to things you would have declined or delegated.
“Can you add that dashboard feature?”
“Sure, AI can knock that out.”
“Can you refactor that module?”
“Yeah, should be quick with AI.”
“Can you investigate that performance issue?”
“I’ll have AI profile it.”
Harvard Business Review calls this work intensification: AI doesn’t reduce your workload; it makes you take on more.
You’re not automating your way to free time. You’re automating your way to more commitments.
The Quality-Speed Death Spiral
Here’s how it compounds:
- AI gives you an initial productivity surge
- That surge creates expectations for speed
- Speed pressure leads to cutting corners on review
- Lower quality creates more bugs
- More bugs mean more debugging and rework
- Debugging takes longer because you didn’t write the code
- You fall behind, pressure increases, quality drops further
The Berkeley researchers warned: “The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.”
This is happening right now across teams. Features ship that “work,” but the developers don’t fully understand how. When they break, debugging becomes an archaeology project through code nobody wrote and barely reviewed. That’s not faster. That’s deferred pain.
What This Actually Feels Like
A software engineer named Siddhant Khare wrote about “AI fatigue” last week. It resonated with me immediately because it’s real and nobody talks about it.
AI-era burnout doesn’t look like working 80-hour weeks. It looks like:
Decision fatigue from validating endless AI outputs. Every line might be wrong. Every function might have a subtle bug. You can’t just skim.
Cognitive load from juggling multiple AI-started initiatives. Each one 80% done. None actually shipping.
Imposter syndrome when you can’t tell if you’re productive or just busy. You wrote 3,000 lines this week. Zero features shipped. Are you slow? Or is the process broken?
Anxiety from commitments you can’t estimate. You said 2 days. It’s been 4. The AI generated the code in 20 minutes and you’ve been debugging it ever since.
Guilt for not keeping up. Everyone else seems to be shipping faster with AI. Why aren’t you?
The research shows that some developers see burnout risk drop 17% with AI — but only if their workload doesn’t increase to fill the gap.
In practice? Workload always increases.
The Team Lead’s Nightmare
If estimating one AI-assisted task is this chaotic, imagine coordinating an entire team.
You’re trying to plan a sprint. Every developer gives you an estimate. Half of them are wildly optimistic because “AI will handle it.” The other half are padding heavily because they’ve been burned.
You don’t know which estimates to trust. You don’t know how to aggregate them into a roadmap. You don’t know how to explain to stakeholders why the team that just adopted “productivity-boosting AI tools” is still missing deadlines.
And when the sprint ends? Half the stories are “80% done.” A quarter shipped but with bugs. The rest are stuck in AI-generated complexity no one fully understands.
Tech leads are stuck in the middle. Can’t estimate their own AI-assisted work. Somehow supposed to help the team estimate theirs.
Sprint planning feels like collective guessing. Retrospectives turn into “we don’t know what went wrong, the AI just… took longer than expected.”
The only solution I’ve found is the boring one: go back to basics.
Estimate anyway. Even if it’s totally wrong.
Run retrospectives. Understand what actually happened.
Repeat every cycle. Gather data.
Adjust future estimates based on reality, not hope.
It’s unglamorous. It’s slow. But it’s the only path I see to understanding our true capacity with AI.
You can’t optimize what you don’t measure. And right now, most teams aren’t measuring anything except “we’re using AI, we should be faster.”
After enough cycles, patterns emerge:
- AI features that touch legacy code take 3x longer than expected
- Net-new features hit estimates more reliably
- Code review adds 40% to any AI-heavy story
- Integration work still takes the same time regardless of AI
None of this is in the “AI boosts productivity 39%” headline. But it’s the reality of coordinating a team in the AI era.
The boring strategies we’ve always used — estimate, measure, learn, adjust — still work. They’re just slower to calibrate now because the variables have changed.
What I’m Trying (Not Prescribing)
I don’t have this figured out. Nobody does yet. But here’s what’s helping in my experience:
The One-Thing Rule
Stop saying yes to multiple simultaneous AI features. One thing from start to shipped before starting the next.
Does it feel slower? Yes.
Do you actually ship more? Also yes.
Multiple AI-started initiatives feel like progress until nothing’s actually done. Finishing one thing beats starting five.
Honest Estimates
When someone asks “how long will this take?” stop giving the optimistic AI-boosted number.
Instead: “AI might generate it in an hour. Integration and debugging might take 3 days. Estimate 4 days to be safe.”
It feels slow. But estimates stop slipping constantly. And trust improves.
The Validation Budget
Timebox AI code review. If you can’t fully understand and validate what the AI built in the time it would have taken to write it yourself, don’t use the AI code.
Sounds counterintuitive. But reviewing 800 lines of AI code you don’t understand for 6 hours defeats the purpose. Sometimes writing 200 lines yourself in 4 hours is actually faster.
Measuring What Matters
Stop tracking lines of code or features started. Track instead:
- Features actually in production
- Bugs introduced per feature
- Time from “start” to “shipped” (not “AI generated code”)
- Team stress level (are people sleeping okay?)
The numbers are uncomfortable. But they’re honest.
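If you want to see how low-tech this can be, here’s the kind of throwaway script I mean: a rough Python sketch that reads a hand-maintained features.csv and prints exactly those numbers. The file name and columns (feature, started, shipped, bugs) are hypothetical placeholders, not a tool I’m recommending; the point is that measuring what matters can be a CSV and twenty lines of code.

    # track.py: a minimal sketch of "measuring what matters".
    # Assumes a hand-maintained features.csv with hypothetical columns:
    #   feature,started,shipped,bugs
    #   dashboard-export,2026-01-05,2026-01-21,3
    #   profile-refactor,2026-01-08,,1    <- empty shipped column = not in production yet
    import csv
    from datetime import date

    def load(path="features.csv"):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def report(rows):
        shipped = [r for r in rows if r["shipped"]]
        print(f"Features started:         {len(rows)}")
        print(f"Features in production:   {len(shipped)}")
        if shipped:
            # Lead time is start to shipped, not start to "AI generated the code".
            days = [
                (date.fromisoformat(r["shipped"]) - date.fromisoformat(r["started"])).days
                for r in shipped
            ]
            bugs = sum(int(r["bugs"]) for r in shipped)
            print(f"Median days to ship:      {sorted(days)[len(days) // 2]}")  # rough median
            print(f"Bugs per shipped feature: {bugs / len(shipped):.1f}")

    if __name__ == "__main__":
        report(load())

Run it at the end of every sprint and the “features actually in production” line does the arguing for you.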
Using Speed for Quality, Not Quantity
When AI genuinely saves time, spend that time on:
- Better tests
- Clearer documentation
- Paying down technical debt
- Deeper thinking on architecture
Not just “more features.”
The productivity gain is real. The question is: who captures it? If it all goes to “more output for the same salary,” you’re on a treadmill. If some goes to making work better and more sustainable, everyone might actually benefit.
At the Team Level: The Data Discipline
For tech leads and managers, the same boring-but-effective cycle applies:
Estimate — Even when it feels like guessing. Get the team’s best guess on record.
Measure — Track actual time, not AI generation time. Start to shipped, not start to “AI wrote code.”
Retrospect — What took longer than expected? What patterns are emerging?
Adjust — Use the data. If AI stories touching legacy code are consistently 3x estimates, factor that in next sprint.
After 4-5 cycles, you start seeing your team’s actual capacity with AI. Not the theoretical 39% boost. The real number.
It’s slower than anyone wants. But it’s the only path to honest planning.
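The “Adjust” step doesn’t need anything fancier than a spreadsheet, either. Purely as an illustration, here’s a rough Python sketch (the story data and category names are made up) that turns a few sprints of retrospective data into per-category estimate multipliers; the “legacy code takes 3x longer” pattern falls straight out of this kind of arithmetic.

    # calibrate.py: a rough sketch of turning retrospective data into estimate multipliers.
    # The stories list is made up; in practice it would come from your tracker's export.
    from collections import defaultdict

    # (category, estimated days, actual days from "start" to "shipped")
    stories = [
        ("legacy",  2, 6),
        ("legacy",  3, 10),
        ("net-new", 2, 2),
        ("net-new", 3, 4),
        ("net-new", 1, 1),
    ]

    totals = defaultdict(lambda: [0.0, 0.0])  # category -> [estimated, actual]
    for category, estimated, actual in stories:
        totals[category][0] += estimated
        totals[category][1] += actual

    for category, (estimated, actual) in sorted(totals.items()):
        print(f"{category:8s} estimate multiplier: {actual / estimated:.1f}x")
    # Next sprint, multiply the raw guess by the observed factor instead of hoping.

It’s crude, but after four or five cycles those multipliers are a more honest planning input than anyone’s gut feeling.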
The Uncomfortable Question
Here’s what I keep coming back to:
The 39% productivity gain might be real. But who’s capturing it?
Your company captures it as more output for the same salary.
Your manager captures it as more ambitious roadmaps.
The market captures it as higher expectations.
You capture… what? More stress? More context-switching? More debugging code you didn’t write?
Unless you actively defend your boundaries, AI productivity tools become productivity traps — a treadmill that speeds up but never lets you off.
I don’t want to sound cynical. AI is genuinely powerful. I use it every day. But the default path is work intensification, not work reduction. And if you don’t choose differently, the default will choose for you.
A Different Path
What if AI augmentation wasn’t about doing more? What if it was about doing better?
What if productivity gains went toward:
- Deeper thinking on hard problems
- Mentoring junior developers
- Paying down technical debt
- Building more resilient systems
- Actually shipping polished features instead of half-finished experiments
- Sustainable pace instead of constant sprinting
The technology is powerful. The question is: who decides what that power is for?
Right now, the default answer is “more output.” But you can choose differently.
You can say no to the fifth simultaneous initiative.
You can give honest estimates instead of optimistic ones.
You can spend AI-gained time on quality instead of quantity.
You can protect your boundaries instead of filling every efficiency gain with new commitments.
The 39% productivity trap is only a trap if you don’t see it coming.
Now you do.
What’s Next
This is still being figured out across the industry. Some weeks teams ship fast and feel great. Other weeks they’re drowning in half-finished AI features and wondering what went wrong.
But patterns are emerging. Data is being gathered. Teams are learning to say no more often. And slowly, the industry is learning to use AI as a tool for better work, not just more work.
If you’re feeling this too — the expectations, the estimation chaos, the validation treadmill — you’re not slow. You’re not behind. The system is broken, and you’re just the first to notice.
The question is: what are you going to do about it?
If this resonated with you, I’d love to hear your experience. Are you in the productivity trap too? What are you trying? Find me on LinkedIn.
Further Reading:
- TechCrunch: “The first signs of burnout are coming from the people who embrace AI the most” (Feb 2026)
- Harvard Business Review: “AI Doesn’t Reduce Work—It Intensifies It” (Feb 2026)
- Fortune: “AI is having the opposite effect it was supposed to” (Feb 2026)
- METR: “Measuring the Impact of Early-2025 AI on Developer Productivity”
- Business Insider: “AI fatigue is real and nobody talks about it” (Feb 2026)