We’re Not Being Replaced by AI. We’re Being Asked to Train It.

Meta is installing tracking software on its employees’ work computers.

Not for security. Not for compliance. For training data.

Mouse movements. Keystrokes. Screenshots. All fed into AI models so agents can learn to do white-collar work autonomously. The internal memo told staff they could help by — and I’m quoting here — “just doing their daily work.”

Oh, and Meta is cutting 20% of its workforce next month.

Tell me if this sounds familiar.


The Skill.md Trap

For months, something has felt off to me about the AI tooling push inside companies. Not the tools themselves — I use Claude Code daily. I think persistent agents are fascinating.

What’s felt off is the framing.

“Document your workflow as a skill.”

“Write hooks so the team can reuse your process.”

“Use this tool so we can capture best practices.”

It all sounds like productivity. Like collaboration. Like making the team better.

But step back and look at the sequence:

Step one: Document your workflow. Break it down. Make it repeatable. Turn your judgment into a checklist.

Step two: Run it through an agent. See where it fails. Fix the prompt. Iterate. You’re not “using AI” — you’re teaching it.

Step three: The agent handles 80% of the task. You’re “reviewing output now.” Supervising. Orchestrating.

Step four: The team ships more with fewer people. The work didn’t disappear. The nature of the work changed.

This isn’t a prediction. This is a process already underway.

Here’s what it looks like in practice. Your manager asks you to write a SKILL.md for your onboarding process. You spend three hours breaking down six months of judgment into a checklist. Next quarter, a new hire runs the skill. It works 80% of the time. Your manager starts to wonder what the remaining 20% is worth.
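For the unfamiliar: a skill is just a markdown file with frontmatter that an agent loads as instructions. A minimal sketch of what that onboarding skill might look like (the frontmatter fields follow Claude Code's SKILL.md convention as I understand it; the steps themselves are invented for illustration):

```markdown
---
name: onboarding-runbook
description: Walk a new hire through local setup, access requests, and a first PR
---

# Onboarding Runbook

1. Clone the monorepo and run the bootstrap script.
2. Request staging access (nudge #infra if it stalls for a day).
3. Pick a `good-first-issue` ticket and open a draft PR.
4. Escalate anything ambiguous to a human. That ambiguity is the 20%.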

If you see this as a threat, I get it. But I’ve started seeing it differently.


The Evidence Is Stacking Up

Let’s be specific about what’s happening.

Meta: Tracking software on US employee computers. Keystrokes. Mouse movements. Screenshots. Goal: build agents that perform white-collar tasks autonomously. Parallel action: cutting ~8,000 jobs. Timeline: next month.

Zuckerberg called 2026 “the year that AI dramatically changes the way we work.” He’s spending $135 billion on AI capex. He’s not betting on humans doing the same things.

China: A GitHub project called “Colleague Skill” went viral. It claims to “distill” a coworker’s skills and personality into an AI agent. It was created as a spoof. But it went viral because it’s actually happening — bosses are instructing tech workers to document their workflows so AI can replicate them.

One engineer told MIT Technology Review the process felt “reductive — as if their work had been flattened into modules in a way that made the worker easier to replace.”

Workers are coping with bleak humor. “A cold farewell can be turned into warm tokens.” Someone built an “anti-distillation” tool to sabotage workflow documentation. It got 5 million likes.

The broader industry: OpenAI is asking contractors to upload real work products — actual PowerPoints, spreadsheets — for training data. Google is telling employees AI use will factor into performance reviews. JPMorgan is telling engineers to “harness AI to save time.” Companies are reorganizing into “AI-native pods.”

The pattern is clear. And yes — the cost savings are a real incentive. Engineering teams are expensive, and any process that can be automated will be. That’s not cynicism. That’s just how business works.

But here’s where I think most people’s analysis stops too early.


The Part Nobody’s Talking About

Here’s the narrative I keep hearing: “AI is coming for your job.”

And here’s the counter-narrative: “AI won’t replace developers. It’ll make them more productive.”

Both are half-truths. And neither is the full picture.

The Pragmatic Engineer’s recent survey of 900+ engineers found that “shippers” — people focused on getting features out — are thrilled. They’re shipping faster. They’re hitting goals. They’re getting promoted.

But the survey also found something else. “Builders” — the people who care about architecture, craft, code quality — are reporting something that sounds a lot like grief.

Identity loss.

One staff engineer described it like this: “I ship more quality code faster. But if the agent has a good handle on the situation, I can give it as much of the tedious parts as I wish.” The tedious parts. The parts he used to love.

Here’s what actually happens. Your coworker ships a feature in two hours with Claude Code. You spend the afternoon debugging it. At 6 PM, you realize you didn’t write a single line of code today. You just reviewed someone else’s AI output.

I won’t pretend that transition doesn’t sting. It does. But I’ve come to believe the grief isn’t about losing our jobs — it’s about losing our identity as the person who writes the code. And that identity was always going to evolve.


Why I See This as a Career Upgrade

Here’s where I break from the doom narrative.

Yes — companies will use AI to cut costs. That’s happening. Yes — the nature of software engineering is changing faster than most of us are comfortable with. That’s also happening.

But look at what’s actually being automated: the mechanical parts. The boilerplate. The repetitive refactors. The grunt work that, if we’re being honest, was never the part that made us great engineers in the first place.

What’s not being automated — and what’s becoming exponentially more valuable — is everything above the code:

  • Judgment. Knowing what’s worth building and what isn’t. Understanding tradeoffs between decisions that look equivalent on the surface but have wildly different long-term implications.
  • Orchestration. Directing multiple AI agents, knowing when to trust their output and when to override it. This is a genuinely new skill category that didn’t exist two years ago.
  • Quality assessment. Evaluating AI output with the same rigor you’d apply to a junior developer’s PR — except the junior never sleeps and produces 10x the volume. Knowing when the output is good enough and when it’s subtly wrong is the new core competency.
  • Security and confidence. As AI generates more code, the attack surface expands. Someone has to understand what’s being shipped, verify it handles edge cases, and maintain confidence in production systems. That someone is more valuable than ever.
  • Architecture and systems thinking. The “what should we build and how should the pieces fit together?” question gets harder, not easier, when you can build things faster. Speed without direction is just expensive chaos.

The math isn’t “AI replaces you.” The math is “AI handles the 80% that was mechanical, and the remaining 20% — the judgment, the taste, the decisions — becomes your entire job.”

That’s not a demotion. That’s a promotion to the work that actually matters.


The Skills That Will Define the Next Era

If the trajectory is clear — and I think it is — then the question isn’t “will my job change?” It’s “am I learning the skills that matter in the new version of this job?”

Here’s what I’m betting on:

Master AI orchestration. Not just “how to prompt.” Learn how agents work. Understand context management, agentic loops, tool use patterns. Know how to break a complex task into pieces an agent can handle and how to verify the assembled result. This is the new version of “knowing your tools,” and it’s just as deep.
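To make the shape of that work concrete, here's a minimal sketch of an orchestration loop. Everything in it is hypothetical: `call_model` stands in for whatever agent API you actually use, and `verify` stands in for your real acceptance checks (tests, linters, a human review). The structure is the point: decompose, delegate, verify, retry, and refuse to assemble anything unverified.

```python
# Minimal orchestration loop sketch: decompose, delegate, verify, retry.
# `call_model` and `verify` are placeholders, not a real agent API.

def call_model(prompt: str) -> str:
    """Stand-in for an agent call; swap in your real client here."""
    return f"draft for: {prompt}"

def verify(subtask: str, output: str) -> bool:
    """Stand-in for your acceptance check: tests, linters, or a human eyeball."""
    return output.startswith("draft")

def orchestrate(task: str, subtasks: list[str], max_retries: int = 2) -> dict[str, str]:
    results: dict[str, str] = {}
    for sub in subtasks:
        for attempt in range(max_retries + 1):
            output = call_model(f"{task}: {sub}")
            if verify(sub, output):  # trust boundary: nothing ships unverified
                results[sub] = output
                break
        else:  # exhausted retries without a verified result
            raise RuntimeError(f"subtask failed verification: {sub}")
    return results

plan = orchestrate("add rate limiting", ["design", "implement", "test"])
print(list(plan))
```

The interesting design decisions all live in `verify`: how strict the checks are, when a human gets pulled in, and what counts as "good enough" per subtask. That's the judgment layer, expressed as code.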

Develop judgment you can articulate. “I don’t like this architecture” isn’t useful. “This architecture will cause coupling problems at scale because X, and here’s the tradeoff I’d make instead” is valuable. AI can generate options. It cannot reliably choose between them in context. That’s you.

Learn to evaluate AI output critically. This means understanding when AI code is subtly wrong — not just syntactically, but architecturally. It means catching the confident hallucination that looks right but breaks under load. It means developing an instinct for “this is too clean to be correct.” Treat AI output like a junior developer’s PR: review everything, trust nothing by default, teach as you go.
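To make "subtly wrong" concrete, here's a hypothetical example: a median function an agent might plausibly suggest. It passes the obvious spot check and quietly fails on even-length input, which is exactly the kind of bug that edge-case review catches and a happy-path glance doesn't.

```python
# A plausible AI suggestion that "looks right": median via the middle element.
def median(xs):
    return sorted(xs)[len(xs) // 2]

# Reviewing it like a junior dev's PR means probing edges, not just the happy path.
assert median([1, 3, 2]) == 2        # odd length: fine
assert median([1, 2, 3, 4]) == 3     # even length: returns the upper middle,
                                     # but the true median is 2.5

# The version a careful reviewer would insist on:
def median_fixed(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

assert median_fixed([1, 2, 3, 4]) == 2.5
```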

Understand security implications. More automated code means more automated attack surface. Someone needs to think about what happens when the agent writes a SQL query from user input, or when the scaffolded auth flow has a subtle CSRF gap. Security literacy is about to become a core engineering skill, not a specialty.
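Here's the SQL case in miniature, using Python's built-in sqlite3 module. The table and payload are invented for illustration; the contrast between string interpolation (what an agent might plausibly scaffold) and parameterized queries (what a reviewer should insist on) is the real point.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'dev')")

user_input = "bob' OR '1'='1"  # a classic injection payload

# What an agent might scaffold: string interpolation into the query.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
# The injected OR '1'='1' clause matches every row, leaking the whole table.

# The fix a reviewer should insist on: a parameterized query.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
# The payload is treated as a literal string and matches nothing.

print(len(unsafe), len(safe))  # 2 0
```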

Focus on “what to build.” Product thinking. Business context. User empathy. The decision of what’s worth building was always the hardest part of engineering. AI makes the building faster, which makes the deciding more important. Engineers who can bridge the gap between business needs and technical implementation will be irreplaceable.

Build in public. Not for clout — for leverage. Your public work — open source, writing, talks — is proof that you think. That you judge. That you have taste. Agents can’t replicate a reputation. They can’t replicate trust. They can’t replicate the network of people who’ve seen your work and know you get it.


The Honest 5-Year Forecast

I’m not here to sugarcoat this. But I’m also not here to doom-scroll. Here’s what I see.

What’s likely (2026–2031):

Your agent will handle the refactor before you finish your coffee. The boilerplate will write itself. The tests will appear while you’re in standup. You’ll spend your day on the things the agent can’t do: deciding if the feature is worth building, reviewing the output for correctness, and maintaining the architectural coherence of the system.

Junior engineer pipelines will shrink — but the smart companies will realize this is a mistake and course-correct. When a senior with an agent can produce what used to take a team of three, the temptation is to hire fewer juniors. But the companies that cut the pipeline entirely will find themselves with no mid-levels in five years and no seniors in ten.

“AI-native” companies will run leaner engineering teams. Not zero. Leaner. The engineers who remain will be more senior, more capable, and more valuable than today’s equivalent. Less typing, more thinking.

What’s possible:

“AI orchestrator” becomes a recognized career path. Not “prompt engineer” — that’s already outdated. Orchestrator. Someone who designs multi-agent workflows, establishes trust boundaries, builds evaluation frameworks, and maintains quality at scale. It’s the natural evolution of senior engineering.

Engineers who master this transition become force multipliers — one person doing what used to take a team, not because they work harder, but because they direct better. The ceiling goes up, not down.

Companies that over-automate without judgment will degrade. Architecture drifts. “AI slop” — low-quality code shipped by agents without proper review — compounds until someone expensive has to untangle it. This creates a premium for engineers who can maintain quality.

What’s unlikely:

Full replacement of engineers who think. Judgment, cross-team coordination, and “what should we build?” don’t fit into prompts. The “why” is harder to extract than the “how.”

Complete deskilling. Someone has to understand what the agent is doing. Someone has to know when it’s wrong. That someone needs deep knowledge — not just prompting skill.


The Reframe

I’ll be straight with you. The Meta stuff is real. The cost-cutting is real. The economic incentive to automate engineering work is enormous — $135 billion from Meta alone, 20% workforce cuts, the promise of “insurmountable cost advantages.”

Companies will push this. Hard. And some engineers will get displaced in the transition.

But here’s what I actually believe: this isn’t the end of software engineering. It’s the biggest career upgrade the profession has ever seen — if you’re willing to evolve with it.

Every time the tools got better, the work got more interesting. We went from writing assembly to writing high-level code. From managing servers to designing cloud architectures. From hand-rolling SQL to building data pipelines.

Each transition killed some jobs and created better ones. The pattern isn’t “developers get replaced.” The pattern is “the floor rises.”

The developers who learned to use compilers instead of writing machine code didn’t lose their jobs. They built operating systems. The developers who learned cloud instead of racking servers didn’t become obsolete. They built Netflix.

The developers who learn to orchestrate AI, judge its output, maintain security and confidence in automated systems, and focus on the hard questions — “what should we build?” and “what are the tradeoffs?” — won’t be replaced either.

They’ll be the ones building whatever comes next.


What To Do About It

Here’s where I get practical.

Stop worrying about being replaced. Start mastering the new skills. The anxiety is understandable but unproductive. Channel it into learning. Pick one AI coding tool and get dangerously good at it. Understand orchestration, evaluation, and trust boundaries.

Own the judgment layer. The “how” is being automated. The “why” isn’t. The “what should we build?” The “what happens when this scales?” The “what are we not seeing?” These don’t fit into skills or hooks or prompts. They’re the layer above automation, and that’s where the value concentrates.

Learn the tools deeply, not broadly. Don’t spread yourself across five AI coding agents. Pick one. Go deep. Understand how it manages context, how its agentic loop works, how to structure projects for it. Deep literacy in one ecosystem beats shallow familiarity with five.

Build in public. Your public work is proof that you think. Agents can’t replicate a reputation or the trust that comes with it.

Watch the economic incentives — but don’t just watch. When a company spends $135 billion on AI and cuts 20% of staff, they’re executing a strategy. Know which side of that strategy you’re on. Position yourself as the person who directs the AI, not the person whose workflow the AI is learning.


The Question I Keep Coming Back To

I’ve been thinking about this a lot lately.

The question isn’t “will AI replace software engineers?”

The question is: “Are you learning the skills that make you irreplaceable?”

Every skill we write. Every hook we configure. Every workflow we document. Yes — we’re encoding knowledge that makes certain tasks automatable. That’s real.

But the knowledge of when to write that skill, which workflow is worth encoding, what the tradeoffs are, and whether the output meets the bar — that’s not going into a prompt anytime soon.

That’s the job. The new version of the job. And it’s a better one.


Final Thought

I still use Claude Code. I still think agents are fascinating. I’m still shipping features with AI help.

But I’m not looking at my keyboard with dread anymore.

I’m looking at it as a tool that’s about to get a massive upgrade — and so is every engineer who decides to learn rather than fear.

The companies are going to automate what they can. That’s not a question. The question is whether you’ll be the engineer who gets automated, or the one who does the automating.

I know which side I’m choosing.

And I think most engineers — the ones who care enough to read something like this — will choose it too.


What do you think? Are you seeing this as a career upgrade or a career threat? I’d genuinely love to hear your perspective — especially the skills you’re betting on for the next five years.

