No Developer Feels AI Literate Right Now — Not Even the Ones Building It

There’s a specific kind of anxiety that hits at 11 PM when you’re scrolling through someone’s thread about the AI workflow that supposedly changed everything. You were productive today. You shipped code. But now you’re wondering if the way you shipped it is already obsolete.

That feeling? It’s not going away. And I say that as someone who builds AI features every day at my job — user-facing products, developer tools, infrastructure — and uses AI to build them too. I’ve been deep in this flow for a while, and I still don’t feel AI literate. Nobody does.

That’s the whole thesis.

The Illusion of AI Fluency

There’s a dangerous gap forming right now — the distance between “I can get AI to produce code” and “I understand what’s happening well enough to make consistent, reliable decisions.”

It’s like ordering coffee in another language and thinking you’re fluent. The demo works. But what happens when the context window fills up and the model starts hallucinating? When the tool you’ve been relying on ships a breaking change to how it handles context, and your entire workflow stops working?

That’s where actual literacy lives. Not in the output — in understanding the mechanics well enough to troubleshoot, adapt, and make real decisions.

The Arc: Convergence to Divergence

Here’s a pattern worth naming, because it explains why everything feels so chaotic.

For the past couple of years, AI coding tools followed roughly the same trajectory. First the conversational phase — chat with a model, get code back. Then the agentic phase — let the model execute actions, read files, run commands. Then context management became the bottleneck — RAG, context windows, retrieval strategies. Then skills and MCPs emerged as ways to extend what agents could do.

Every major tool went through these same stages. Claude Code, Cursor, Copilot, Codex — the patterns were recognizable across all of them. If you learned one, the mental models transferred.

That’s no longer true.

The tools are diverging. Fast. Claude Code now has hooks, subagents, trust modes, and a growing ecosystem of skills. Cursor has its own rules system with a fundamentally different interaction model. Codex has AGENTS.md. Amp, OpenCode, and a dozen others are carving their own paths.

Each tool is developing its own opinion about how development should work. And those opinions are starting to meaningfully diverge.

This is React vs Angular vs Vue all over again — except the stakes are higher. That was about which UI library renders faster. This is about how you think, plan, and build software at a fundamental level.

The Best Practice Treadmill

A few weeks ago, the developer community was buzzing about skills replacing MCPs. Skills were simpler, lighter, didn’t require running separate processes. The consensus was forming: skills are the future, MCPs are the past.

Then optimizations changed the calculus on MCP context consumption. Suddenly MCPs were more viable again. The narrative flipped.

Now? People use a mix of both. Some skills actually wrap MCPs internally. Nobody’s sure if that’s a good pattern or an anti-pattern.

This all happened in the span of a few weeks. And it’s the new normal — the “right way” to structure your AI workflow has a half-life measured in weeks, not months.

So What Does This Mean for Your Career?

If senior engineers with years of pattern recognition and deep technical foundations feel lost — what does this mean for someone finishing college right now? Or someone transitioning into tech?

I’ll be direct: it’s harder than ever to get a job as a software engineer. You don’t just need to know how to code anymore. You need to know how to code, how to work with AI, which AI tools to invest in, and how to recognize when current practices expire. Most bootcamps and university programs haven’t even begun to address this. And companies don’t know what to test for either — interview loops are still measuring skills from two years ago while the actual job involves orchestrating agents and making architectural decisions AI can’t make for you.

So the question people are asking — “Is it even worth learning software engineering right now?” — is genuine.

Here’s what I think: the answer is yes. But the approach has to change.

The demand for engineers isn’t dying — job openings have surged this year, and companies that replaced senior engineers with juniors-plus-AI are already course-correcting. Human judgment, architectural thinking, and the ability to make sense of complex systems still matter. But you can’t just learn to code and expect that to be enough anymore.

Pick One Tool. Get Dangerously Good at It.

Stop trying to learn every coding harness. Don’t split your attention between Claude Code, Codex, Amp, OpenCode, and whatever drops next week. Pick one. Commit to it. Go deep.

There’s research backing this up. BCG found that productivity increases with one or two AI tools, peaks around three, and actively drops when you add a fourth. They’re calling it “AI brain fry” — more tools means more context switching, more cognitive load, worse outcomes. Mastering one tool isn’t just a preference. It’s the strategy that actually works.

I’ll tell you what I use: Claude Code. It has the richest set of capabilities right now — hooks, skills, subagents, MCP integrations, trust modes — and Anthropic’s models consistently deliver. The community around it is the most active I’ve seen. That could change in six months. But the point isn’t really the specific tool.

When you deeply learn one tool — when you understand how it manages context, how its agentic loop works, how to structure your projects for it — you develop transferable mental models. You learn what “good context management” means, not just how one tool implements it. You learn why hooks exist, why skills exist, why MCPs exist. Those patterns survive the churn.
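That transferable mental model is easiest to see in code. Under the hood, every one of these harnesses runs some version of the same loop: send context to a model, execute the tool call it requests, feed the result back, repeat until it answers. Here is a minimal sketch; the `fake_model` stub and `read_file` tool are invented for illustration, not taken from any real harness.

```python
def read_file(path):
    # Stand-in tool: a real harness would hit the filesystem.
    return {"main.py": "print('hello')"}.get(path, "<not found>")

TOOLS = {"read_file": read_file}

def fake_model(messages):
    # Stub model: requests one tool call, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "main.py"}}
    return {"answer": "main.py just prints 'hello'."}

def agent_loop(task, model=fake_model, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(messages)
        if "answer" in reply:  # model is done
            return reply["answer"]
        # Execute the requested tool and feed the result back as context.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(agent_loop("What does main.py do?"))
```

The specifics differ everywhere (hooks intercept that loop, skills extend the tool table, context management decides which messages survive), but once you recognize this shape in one tool, you can find it in all of them.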

If the tools diverge to the point where switching becomes necessary, you’ll transition from a place of strength — deep literacy in one ecosystem — rather than shallow familiarity with five.

Build Something That Felt Impossible

Theory doesn’t stick without practice.

Think of a project you’ve always wanted to build but never had the time. Something slightly out of reach — too many moving parts, too much boilerplate, too many unknowns. Now try to build it with your AI coding tool of choice.

Here’s my example. I’d been wanting to rebuild my blog with a custom WordPress theme for months. I knew exactly what I wanted — the design, the deployment pipeline, the git integration. What I didn’t have was the time to write it all out.

So I started prompting my coding agent from the gym. Between sets, between exercises, I described what I needed, reviewed what came back, and steered it. Within about three days of gym sessions, the new blog was live.

That was a genuine wow moment. Not the viral demo kind — the personal kind. It didn’t come for free. I knew what to ask for, which tools to use, how to set up the deployment. But I never had to remember WordPress internals or look at a single line of code.

Don’t aim for perfection. Just try to get it done and pay attention to how it feels. You’re going to land in one of two places.

You build the thing faster than you ever could have alone. Maybe the code isn’t pristine. Maybe you restarted a couple of times. But it works, and you built it in hours instead of weeks. That wow moment is fuel — it motivates you to refine your prompts, learn the next layer, keep pushing.

Or you struggle. The agent gets stuck in loops. It misunderstands your intent. You feel like you’d be faster doing it yourself.

If you land here, don’t get discouraged. And don’t become someone who dismisses AI as useless based on a bad first experience.

What almost certainly happened is that your scope was too broad. You gave the agent a vague, ambitious prompt and expected it to figure out the details. That’s the most common mistake starting out.

The fix: shrink the scope. Way down. Pick the smallest piece — a single endpoint, one component, a basic data model — and try again. Get one small thing working. Feel what it’s like when the tool actually helps. That’s your baseline. From there, you gradually expand — bigger scope, better context, more trust in the loop.
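To make "smallest piece" concrete: a good first target looks like the hypothetical health-check endpoint below. Stdlib only, one route, no framework, small enough that you can read every line the agent produces and verify it in a minute.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Single-endpoint server: GET /health returns {"status": "ok"}."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        # Silence per-request logging to keep output clean.
        pass

# To run it: HTTPServer(("127.0.0.1", 8000), HealthHandler).serve_forever()
```

Once something at this size works end to end, widening the scope one layer at a time is far less frustrating than starting broad.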

The Uncomfortable Truth

Nobody has this figured out. Not the senior engineers. Not the tool makers. Not the influencers who post their workflows like they’ve cracked the code.

The developers who are going to thrive aren’t the ones who memorize every feature of every tool — they’re the ones who build a learning rhythm they can sustain. Context management, agentic workflows, prompt design, scope control. These patterns are more stable than the specific implementations, and they’re what make you dangerous regardless of which tool you’re holding.

Pick one tool. Build one thing. Learn one lesson at a time.
