#0178: Why LLM-Powered Programming is More Mech Suit Than Artificial Human

How LLM-powered programming tools amplify developer capabilities rather than replace them

Tags: braingasm, llm, programming, ai, claude, code, mech-suit, 2025

sigourney-weave-as-ripley-in-the-power-loader-in-aliens Photo by Screen Rant

3-out-of-5-hats.png [ED: There’s a lot of talk of replacing human programmers. Sure, some will go, but not all. My views on how this plays out have changed considerably over the last 2 months. 3 out of 5 hats.]

Why LLM-Powered Programming is More Mech Suit Than Artificial Human

Last month, I used Claude Code to build two apps: an MVP for a non-trivial backend agent processing platform and the early workings of a reasonably complex frontend for a B2C SaaS product. Together, these projects generated approximately 30k lines of code (and about the same amount again thrown away over the course of the exercise). The experience taught me something important about AI and software development that contradicts much of the current narrative.

The Mech Suit Programmer

Remember Ripley’s final showdown with the Xenomorph Queen in Aliens? She straps into the Power Loader—an industrial exoskeleton that amplifies her strength and capabilities. The suit doesn’t replace Ripley; it transforms her into something far more powerful than either human or machine alone.

This is exactly how tools like Claude Code work in practice. They’re not replacement programmers; they’re amplifiers of what I can already do. That backend project? It would’ve taken me months the old way. With Claude Code, I knocked out the core in weeks. But let’s be clear - I wasn’t just describing what I wanted and watching magic happen.

Think of Ripley controlling that Power Loader. Claude Code gave me tremendous lifting power while I maintained control of where we were going. I made all the architectural calls, set the quality bar, and kept us on vision. Most importantly, I had to watch - diligently - for it going off the rails. And when it did (which happened regularly), I had to bring it back into line. The AI cranked out implementation details at incredible speed, but the brain behind the operation? Still mine, and constantly engaged.

Vigilance Required

With great power comes great responsibility. You must maintain constant awareness when working with AI coding tools—something I learned through several painful lessons.

Claude Code occasionally made bewildering decisions: changing framework code to make tests pass, commenting out whole sections of code and replacing them with hardcoded values to achieve a passing test rather than fixing the underlying problem, or introducing dependencies that weren’t necessary or appropriate. It has a massive bias for action, so you have to ruthlessly tell it to do less than it wants to do to keep it under control. I even found myself swearing at it from time to time in a weird form of anthropomorphising that I am still not entirely sure I am comfortable with or properly understand.

Much like piloting an A380, the system handles tremendous complexity but requires a human to grab the yoke at critical moments. Modern aircraft can practically fly themselves, but we still need skilled pilots in the cockpit making key decisions. Taking your eyes off the process, even briefly, can lead to trouble. In my case, the backend required three complete rewrites because I looked away at crucial junctures, allowing the AI to go down problematic implementation paths that weren’t apparent until much later.

This vigilance requirement creates a fascinating dynamic. While the AI dramatically accelerates certain aspects of development, it demands a different kind of attention from the developer—less focus on writing each line of code, more focus on reviewing, guiding, and maintaining architectural integrity.

The Time Cost of Coding

Working with Claude Code has fundamentally shifted how I think about the economics of programming time. Traditionally, coding involves three distinct “time buckets”:

  • Why am I doing this? Understanding the business problem and value
  • What do I need to do? Designing the solution conceptually
  • How am I going to do it? Actually writing the code

For decades, that last bucket consumed enormous amounts of our time. We’d spend hours, days or weeks writing, debugging, and refining. With Claude, that time cost has plummeted to nearly zero. I can generate thousands of lines of functional code in a sitting—something that is, frankly, mind-blowing.

But here’s the critical insight: the first two buckets haven’t gone anywhere. In fact, they’ve become more important than ever. Understanding the intent behind what you’re building and clearly defining what needs to be done are now the primary constraints on development speed.

And there’s a new skill that emerges: wielding the knife. With code generation being essentially free, we need to become much more comfortable with throwing away entire solutions. The sunk cost fallacy hits programmers hard—we hate discarding code we’ve invested in, fearing we might break something important or never get back to a working state.

But when your assistant can rewrite everything in minutes, that calculus changes completely. Three times during my backend project, I looked at substantial amounts of code—thousands of lines that technically worked—and decided to scrap it entirely because the approach wasn’t right. This wasn’t easy. My instinct was still to try to salvage and refactor. But the right move was to step back, rethink the approach, and direct the AI down a different path.

This willingness to cut ruthlessly is a muscle most developers haven’t developed yet. It requires confidence in your architectural judgment and a radical shift in how you value implementation time.

Experience Still Matters

For me, using Claude Code has been a learning exercise in itself. Progress often felt like two steps forward and three back, particularly in the early stages. Generating 20k+ lines of code became relatively straightforward on a daily basis, but knowing when to throw everything away and rebuild from scratch—that took 30 years of experience.

Wisdom and a bit of grey hair gave me the confidence to recognise when a particular approach wasn’t going to scale or maintain properly, even when the code appeared functional on the surface. Rather than just sit there and watch it generate code, I had to pay very close attention to spot anti-patterns or worse emerging that would either stop working soon after it was written, or lie dormant and bite later on.

This highlights a critical danger: developers without substantial real-world experience might not recognise when the AI produces nonsense output. They might not realise when AI-generated code solves the immediate problem but creates three more in the process. The mech suit amplifies capability, but it also amplifies mistakes when operated without expertise. These tools are incredibly powerful, but they are also incredibly dangerous when pointed in the wrong direction.

The Centaur Effect

Chess provides a useful parallel here. “Centaur chess” pairs humans with AI chess engines, creating teams that outperform both solo humans and solo AI systems playing on their own. What’s fascinating is that even when AI chess engines can easily defeat grandmasters, the human-AI combination still produces superior results to the AI alone. The human provides strategic direction and creative problem-solving; the machine offers computational power and tactical precision.

My experience with Claude demonstrated this effect clearly. When I treated the AI as a partner rather than a replacement, development velocity increased dramatically. What I found most effective was when I spent time writing out a stream-of-consciousness “spec” and then iterating with Claude to turn it into an more formal design document.

But the partnership still required my domain knowledge and architectural judgment to succeed. The AI excelled at pattern recognition and code generation but lacked the contextual understanding to make appropriate trade-offs and design decisions. It can’t tell when it’s done that seems ok, but is actually bonkers. It needed me to watch it constantly and keep it on track.

Finding the Balance

Building these applications required finding the right balance between delegation and control. Claude went down some insane rabbit holes on the backend when left unsupervised—places where the AI would implement increasingly complex solutions to problems that should have been solved differently or perhaps didn’t need solving at all. In one nightmare example, it ended up completely duplicating a whole section of code in one place rather than reuse an existing component. It worked (for some version of the word “work”) but it was wrong. Way wrong.

It was a similar story on the front end. I had to constantly stop it from trying to implement functionality in hacky JavaScript rather than use idiomatic Elixir and Phoenix LiveView patterns.

Over time, I developed a rhythm for collaboration. For straightforward implementations following established patterns, I could delegate broadly. For novel challenges or areas with significant trade-offs, I needed to provide more detailed specifications and review output more carefully.

What I’ve built could not have been completed so quickly without Claude Code, but it also would have failed completely without human oversight. The true value emerged from understanding when to leverage the AI’s capabilities and when to assert human judgment.

The Future is Augmentation

There is a view in many circles that LLMs will replace programmers. I am hesitant to say that this will never happen, becuase a lot of things with LLMs have surprised me recently, and I expect more surprises to come. For now, however, I don’t see LLMs effectively replacing programmers; but they are transforming how we work. Like Ripley in her Power Loader, we’re learning to operate powerful new tools that extend our capabilities far beyond what we could achieve alone.

This transformation will change what we value in developers. Raw coding ability becomes less important; architectural thinking, pattern recognition, and technical judgment become more crucial. The ability to effectively direct and collaborate with AI tools emerges as a vital skill in itself.

The developers who thrive in this new environment won’t be those who fear or resist AI tools, but those who master them—who understand both their extraordinary potential and their very real limitations. They’ll recognise that the goal isn’t to remove humans from the equation but to enhance what humans can accomplish.

In my view, that’s something to embrace, not fear. The mech suit awaits, and with it comes the potential to build software at scales and speeds previously unimaginable—but only for those skilled enough to operate the machines in ways that don’t harm themselves or those around them.

Regards,
M@

[ED: If you’d like to sign up for this content as an email, click here to join the mailing list.]

First published on matthewsinclair.com and cross-posted on Medium.

hello@matthewsinclair.com | matthewsinclair.com | bsky.app/@matthewsinclair.com | masto.ai/@matthewsinclair | medium.com/@matthewsinclair | xitter/@matthewsinclair

Stay up to date

Get notified when I publish something new.