Don't panic, impl Things

A blog by Martijn Gribnau

08 Mar 26

The Middle Ground

Like many, I too have been spending a lot of time thinking about (generative) AI and its value. In my field of work (software engineering), on online forums such as Hacker News, and in blog posts, generative AI has become a polarized topic between the believers and the unbelievers. Of course there's a middle ground, yet as is often the case, the outer ends are the most vocal and usually the most certain of their position. I suspect this divide maps onto something deeper: how people perceive the value of their work. To some, code and software are just tools, means to an end. They're the makers, the pragmatists of our field. At the other end are the people who don't just care about what they're building, but also about how it's built. They're craftspeople[1][2], perfectionists who care as much about how something is built as about what it does.

In companies and larger projects, I'm convinced you need both. Craftspeople tend to spend (too) much time on details. Some details matter, but not all. Usually, moving forward is necessary, both for economic reasons and to learn more about the real requirements of the product. That's where makers come in. They are better at making difficult decisions, even with incomplete information. However, they also tend to "just make it work", even if that means suboptimal solutions and architecture. In turn, future additions can be chaotic, more difficult, and, in the longer term, time consuming; it can also mean more maintenance[3]. Where the balance rests depends on many factors. What do you produce? Are you responsible for maintenance? Do your customers care about what maintenance will cost them, or how long it will take before a product must be replaced?[4]

When searching for a balance, easy answers are rare. However, I'm convinced that the answer is almost never at the edges of the spectrum. When in doubt, choose some kind of middle ground. This is why the polarized AI debate frustrates me. On one side, people want to use AI to build as much and as fast as possible; the initial outcomes are all that matter. On the other, people refuse to engage with AI at all and dismiss it outright. Neither extreme leaves room for the conversation that actually matters (assuming AI is here to stay): when does it help, when doesn't it, and how do we work well regardless? The answer is unlikely to be found at either extreme.

[1] As with almost anything, almost nobody is completely on one side or the other; everything is a spectrum. Yet no argument can be made if you must account for infinite exceptions. Every model is flawed in some way.

[2] I also wonder if, and how, the color personality models relate to people's stance on generative AI. I would hypothesize that makers (pragmatists) skew red or yellow (i.e. action oriented, results focused, impatient with details), and that craftspeople (perfectionists) skew blue or green (i.e. methodical, quality focused, slower to act).

[3] Software projects are rarely, if ever, done. After new features are added, maintenance is required for security updates, to keep internal and external services compatible, and to remove overhead. In that respect, software is no different from hardware, whether it's an aircraft, a boat, a bike, a server rack, a graphics card, or even a pair of shoes. Everything wears down. The type of maintenance changes, but not the need.

[4] If you (or a competitor) can build a new version or a competing product at a fraction of the cost, that affects the sustainability of software businesses. Generative AI might be exactly this kind of shift, and if so, it will affect where the balance rests.