Blog #0171 (20240619): Will LLMs Ever Completely Take Over Coding from Human Software Engineers?

Writing software is 20% syntax and 80% wisdom. Which bit do you think LLMs are good at?

Tags: braingasm, podcast, knowledge, creativity, ai, gen-ai, generative, software, programming, engineer


Photo by Christina @ wocintechchat.com on Unsplash

[ED: This post poses a question about the likelihood of LLMs ever fully replacing software engineers. Unfortunately, it doesn’t really answer the question: 1/5 hats]

A thought occurred to me today while chatting with someone about Large Language Models (LLMs):

LLMs are auto-correct on steroids when they work, and auto-correct on LSD when they don’t.

Or maybe it should be PCP rather than LSD? Anyway, this conversation led to an intriguing question:

Will LLMs take over coding?

Here’s my take:

No, not as things stand.

Coding is 20% syntax and 80% wisdom. LLMs excel at syntax but struggle with wisdom. Wisdom involves reasoning, and the real question is whether LLMs can reason. As of now, the answer is a straightforward ‘no’, at least for de novo code for which no worked examples exist in the training data. Can an LLM generate solid test code? Yes. Can it write function-level documentation? Again, yes. But net-new code at the scale and complexity of a non-trivial enterprise or SaaS back-end? Nope. Well, not yet, and not without human help.

However, will they be able to reason in the future? This is a more complex question.

There are three possibilities to consider:

[1]. LLMs will never be able to reason because something fundamental about human-like reasoning eludes algorithmic processing in a Von Neumann architecture machine. We don’t know what that fundamental limitation might be or even if it exists. Part of the reason we don’t know is that we don’t fully understand how human reasoning works. Therefore, it’s hard to determine if another system is replicating our reasoning process.

[2]. LLMs will be able to reason, but this will require one or more currently unknown innovations in the way LLMs, and the machines they run on, operate. We don’t yet know what those innovations might be. By analogy, the transition from valves to transistors sparked a massive disruption in computing, delivering enormous value to society. Is a similar disruption possible, one that takes us from the transformer architecture of today’s LLMs to a new, revolutionary paradigm? It’s uncertain but possible.

[3]. LLMs will be able to reason relatively soon. Developments in machine reasoning (such as Q*) are almost ready for prime time. We may see these advancements rolled out in commercial LLMs in the next few years, or perhaps even sooner.

The probability that at least one of these three propositions is true is 100% (i.e. P1 + P2 + P3 = 1). However, how we should allocate the probabilities across the three scenarios remains an open question.
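
To make that concrete, here is a purely illustrative allocation (the numbers are made up, not a forecast): if you put P1 = 0.25, P2 = 0.50 and P3 = 0.25, then the probability that LLMs eventually learn to reason, one way or another, is P2 + P3 = 0.75, while the chance it happens soon is just P3 = 0.25. Your own split may look very different.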

My beliefs on this topic vary wildly, sometimes even throughout the same day.

What do you think?

Regards,

M@

Also cross-published on Medium.

[PS: Hat tip to Craig Ford for some edits and for sparking the conversation in the first place.]

[ED: If you’d like to sign up for this content as an email, click here to join the mailing list.]
