I’ve spent my career fighting for clean code. In code reviews, in architecture meetings, in those long debates about naming conventions that everyone pretends to hate but secretly cares about. Readable code. Well-structured code. Code that respects the next person who has to touch it.
I’m starting to realize that none of that might matter anymore.
Clean Code Was Always a Human Interface
Every clean code practice we follow was invented to solve a human problem.
Descriptive variable names? So a human can read it. Separation of concerns? So a human can navigate it. Consistent formatting, small functions, clear abstractions? All of it — designed to make code convenient for people to write and to read.
The entire philosophy assumes that humans are the primary audience of source code.
But what happens when they’re not?
AI Doesn’t Need Your Clean Code
The more we rely on AI to write, review, and maintain code, the less we actually know about the implementation details. And I don’t mean that in a lazy way — I mean structurally. The workflow is changing. You describe what you want, AI generates it, you review the output at a high level, and you move on.
AI doesn’t care about your variable names. It doesn’t need elegant abstractions to understand what’s happening. It processes the entire codebase — messy or clean — with the same indifference. It doesn’t get confused by a 500-line function. It doesn’t lose context the way a human does after scrolling through too many files.
I had a moment recently that made this click. I was reviewing AI-generated code and caught myself leaving comments about naming and structure — the same feedback I’d give a junior dev. Then I paused. Who was I writing these comments for? The AI would regenerate the whole thing from scratch on the next prompt anyway. I was applying human code review instincts to a process that, for the most part, no longer has a human on the receiving end. Old habits addressing a problem that no longer exists.
The practices we built specifically for human readability and human convenience are becoming overhead. In some cases, they’re becoming a bottleneck — extra layers of abstraction that add complexity without benefiting the thing that’s actually doing the reading.
This isn’t a thought experiment. This is already happening in how teams ship software.
The Highest Level Language Is English Now
If readability stops being the priority, what takes its place? Performance.
If AI can handle the complexity regardless, why optimize for human readability when you can optimize for raw execution speed? The ideal language for AI-driven development might not be Python or TypeScript. It might be C. It might be Rust. It might be something even lower level where AI has fine-grained control over memory, threading, and every implementation detail — things that are painful for humans but trivial for a model that doesn’t get frustrated.
We’ve always talked about “high level” and “low level” languages. High level meant closer to human thinking, low level meant closer to the machine. But now there’s a level above all of them.
English. Portuguese. Mandarin. Whatever you speak.
Natural language is the highest level language now. LLMs are remarkable polyglots — they work fluently in all of them. And code? Code is just the compilation target.
We went from writing machine instructions, to writing human-readable code, to just… describing what we want in plain words. Each step abstracted away more control. Each step moved us further from the metal.
We’re Losing Control at Every Layer
It’s not just that AI writes the code. People use AI to plan the work, brainstorm the architecture, make decisions about what to build and how to build it. The entire pipeline — from idea to implementation — is being routed through language models.
And LLMs are dangerously convincing. Their reasoning is well-structured even when the underlying data is fabricated or slightly off. I’ve caught myself reading an AI-generated explanation, thinking “yeah, that makes sense,” only to realize later that a key detail was subtly wrong. Or worse — never realizing it at all. The convincing tone becomes a trap.
You could argue that humans were never perfectly accurate either. Fair. We’ve always built software on incomplete knowledge and best guesses. But there was something grounding about having a person in the loop who had intuition, experience, and skin in the game. Someone who could smell when something was off, even if they couldn’t articulate why.
The more we delegate — not just the coding, but the thinking — the more that instinct fades. And I’m not sure we’re paying enough attention to what we’re losing.
Maybe I’m Too Attached to the Craft
Maybe I’m romanticizing this. Maybe code was always just a means to an end and I turned it into something more than it needed to be. I built part of my identity around writing good code, caring about architecture, treating the codebase as a product in itself. It’s hard to watch that become irrelevant and not take it personally.
Maybe I’m onto something. Maybe the people who cared about the craft will be the ones who notice when the quality starts slipping in ways that AI can’t detect. Or maybe that’s just what I tell myself to feel relevant.
I genuinely don’t know.
And I can’t be a hypocrite about it. This very piece — I’m using AI to help me review it, refine the structure, make sure it reads well. I’m literally writing about the death of human craft while using the thing that’s killing it to help me write better.
But the ideas are mine. The opinions are mine. The discomfort is mine. AI didn’t tell me to feel this way — I felt it, and then I used a tool to articulate it more clearly. There’s a difference between using AI as a tool and being used by it. At least I think there is.
The Mental Model Shift
I don’t have a solution. But I’ve been rethinking how I relate to the work, and that’s helped more than any specific tool or workflow.
The shift is this: if code is becoming the compilation target, then what you’re really building isn’t the code — it’s the system of decisions that produces it. Your taste. Your standards. Your judgment about what good looks like. That’s the actual product now.
And that’s something you can teach to AI.
I’ve been experimenting with this — taking the patterns I’ve developed over years of writing software and encoding them into the tools I work with. Not just “generate a function that does X” but “here’s how I think about error handling, here’s my preference on abstraction depth, here’s what I consider acceptable tradeoffs.” The more specific you get about your own engineering philosophy, the more the output starts to feel like yours instead of generic AI slop.
This isn’t complicated or expensive. The tooling to build your own AI workflows — agents that understand how you work — is accessible today in a way that would’ve been unthinkable two years ago. You don’t need a team or a platform. You need clarity about your own standards and the willingness to invest time in teaching them.
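What does “teaching your standards” look like in practice? Here’s a minimal sketch of one approach: encoding your engineering philosophy as a reusable preamble that gets prepended to every generation request. Everything in it is hypothetical — the standard names, the wording, and the `build_prompt` helper are illustrative, not any real tool’s API.

```python
# Hypothetical example: a personal "engineering standards" preamble that
# travels with every request to an AI coding assistant, so the output
# reflects your philosophy instead of generic defaults.

ENGINEERING_STANDARDS = {
    "error handling": "Fail fast on programmer errors; wrap and log external I/O failures.",
    "abstraction depth": "Two layers max: a thin public API over concrete implementations.",
    "tradeoffs": "Sacrifice readability only for measured hot-path performance wins.",
}

def build_prompt(task: str, standards: dict[str, str] = ENGINEERING_STANDARDS) -> str:
    """Prepend the author's standards to a task description, so every
    generation request carries the same engineering judgment."""
    rules = "\n".join(f"- {topic}: {rule}" for topic, rule in standards.items())
    return f"Follow these engineering standards:\n{rules}\n\nTask: {task}"

prompt = build_prompt("Generate a retry wrapper for network calls.")
print(prompt)
```

The mechanism is mundane; the leverage is in the content of the dictionary. The same idea shows up in real tools as repository-level instruction files or custom system prompts — whatever form it takes, the point is that your standards become a durable artifact rather than feedback you repeat on every review.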
If you’ve spent years developing engineering taste, that taste is now leverage. You can apply it at a scale that was never possible when you had to write every line yourself. More ambitious projects. More complex systems. Things that would’ve required a team, handled by one person with clear vision and the right tools.
It only works, though, if you stay in the driver’s seat — if you’re the one making the calls about what ships and what gets thrown away. Not a consumer of whatever AI generates, but the lead. The final authority.
And right now, I’m watching a lot of people quietly stop being that.
I Don’t Have a Clean Answer
If language models keep evolving at even half the pace we’ve seen over the last couple of years, the industry in five years looks nothing like it does today. The way we think about programming, about code quality, about what it means to be a software engineer — all of it is up for renegotiation.
I don’t have a neat conclusion. I have a tension I’m sitting with, and I think a lot of developers feel it too even if they haven’t put words to it yet.
Clean code might be dead. The practices, the principles, the carefully named variables and thoughtfully extracted functions — they might genuinely become artifacts of an era when humans needed to read what humans wrote.
But the intention behind clean code? Caring about what you build. Taking pride in the craft. Giving a damn about quality even when no one is looking?
That can’t die. Unless we let it.