But It Works on My Machine!

AI Is Shifting Our Jobs. Should I Be a Carpenter?

Short Answer

I don’t think so.


The Past (?): When This Was Sci-Fi

I remember being a junior developer in 2018. Back then, the idea of AI agents writing production code felt completely sci-fi. Asking a machine to build features for you or reason about your codebase wasn’t something you seriously considered.

And 2018 wasn’t that long ago; it hasn’t even been ten years!

When GitHub Copilot launched, it was my first real exposure to AI in daily work. It was genuinely useful, especially for boilerplate code, repetitive services, and defining RSpec test scenarios: the kind of stuff that takes time but doesn’t require deep creativity.

I loved the feeling that Copilot was reading my mind. Sometimes I was about to write something, and then, boom, it autocompleted exactly that. Pretty impressive, huh?

Then Copilot Chat arrived. Now you could ask questions about your codebase, explore solutions interactively, and reason about changes. Tools like Cursor pushed things further. Prompts became shorter. Outputs improved. The hype grew quickly.

And the conversation shifted from:

“This is helpful.”

to

“They took er jobs!”



The Hype

For a while — and honestly, this is still happening — it really felt like something massive was unfolding.

Demos looked incredible. Twitter threads claimed models were already outperforming senior engineers. Agents were running overnight, generating entire features with minimal input.

The narrative became clear: you’re cooked, bro.

But actually using these tools in real projects felt far more nuanced.

When you asked for something simple, you sometimes got an overengineered solution — like using a bazooka to kill a mosquito. When you asked for something complex, hallucinations crept in. You could easily spend more time debugging the AI’s misunderstanding of your requirements than building the feature itself.
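A toy sketch of what I mean by the bazooka, entirely hypothetical and not taken from any real model output: ask for a simple "does a discount apply to orders over $100?" check, and you might get a whole rule-engine hierarchy when a one-liner would do.

```ruby
# Hypothetical over-engineered output for "check if a discount
# applies to orders over $100": an abstract rule, a concrete rule,
# and an engine to iterate over them.

class DiscountRule
  def applies?(order_total)
    raise NotImplementedError
  end
end

class ThresholdDiscountRule < DiscountRule
  def initialize(threshold)
    @threshold = threshold
  end

  def applies?(order_total)
    order_total > @threshold
  end
end

class DiscountRuleEngine
  def initialize(rules)
    @rules = rules
  end

  def any_applies?(order_total)
    @rules.any? { |rule| rule.applies?(order_total) }
  end
end

engine = DiscountRuleEngine.new([ThresholdDiscountRule.new(100)])
puts engine.any_applies?(150) # => true

# ...when all the requirement actually called for was:
def discount_applies?(order_total)
  order_total > 100
end

puts discount_applies?(150) # => true
```

Both versions behave identically; the difference is that one of them now has three classes for you to review, name, and maintain.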


The Current Reality

Today, AI is deeply embedded in many development workflows. It plugs into your tools, generates code, refactors files, writes tests, takes a technical plan and executes it step by step, and can even operate semi-autonomously, with multiple agents playing different roles: coder, reviewer, “hacker,” and so on.

And still, I see clear limitations.

AI struggles with subtle UI logic across multiple edge cases. It struggles with designing systems that need to scale cleanly over time. It struggles with making trade-offs that depend on business context and maintaining long-term architectural consistency.

At the same time, I’m not pessimistic about it. Used and configured correctly, AI can dramatically speed up development. Things that took a week a couple of years ago can now take days, and that’s the real sauce behind all of this.

The question is whether that sauce replaces developers.

From what I see, it doesn’t, but it does significantly change what we spend our time on.


What Actually Changed

In my day-to-day work, I write less raw code than I used to. Instead, I spend more time writing tickets, drafting technical plans, reviewing generated code, reviewing code from coworkers (which, like mine, is often AI-generated), validating whether a solution actually makes sense, and sometimes juggling more than one ticket at the same time.

A significant part of my job now is reviewing what an AI agent produced and deciding whether it’s correct, maintainable, and aligned with the codebase.

And that doesn’t feel like the end of development. It feels more like a shift in responsibility.

Being a programmer isn’t just about typing anymore.

The job is moving from the person who types the code to the person who owns what gets shipped. It’s less about producing lines of code and more about understanding, validating, and standing behind what goes into production.

And that leads to what I think is the real risk.



The Real Risk: Understanding Debt

We’ve always talked about technical debt: rushed decisions, messy abstractions, shortcuts that become painful later.

AI introduces something slightly different: understanding debt.

Understanding debt happens when you ship code you don’t fully comprehend.

With AI, you can generate large amounts of working code quickly. It compiles. It passes tests. It ships.

But:

Imagine a feature that works fine for two months. Then a rare production bug appears. The original engineer has moved to another team. The AI setup has changed. The code technically works, but nobody fully understands why it was structured that way, and the hyped AI agent of the moment, asked to fix it, just guesses at things without helping much.

Now the issue isn’t just technical debt. It’s that the team lacks deep understanding of what they shipped.

Technical debt makes code harder to maintain. Understanding debt makes ownership fragile.

If developers stop deeply understanding what they build, they lose the core skill that makes them valuable: reasoning about systems over time.


Conclusion

I don’t see developers disappearing.

What I see is a profession that’s becoming more about judgment than typing, more about ownership than output.

AI can generate code. It can accelerate execution. It can automate parts of the workflow. But it doesn’t take responsibility. It doesn’t own production bugs. It doesn’t sit with the long-term consequences of architectural decisions.

We do.

And as long as that remains true, I don’t think we’re going anywhere.