How does AI change software development? Code writing becomes cheaper, judgement doesn't
Posted: (EET/GMT+2)
OpenAI's ChatGPT was launched in November 2022, just over three years ago. Tools like OpenAI Codex, GitHub Copilot and Claude Code have since made it increasingly clear that writing code entirely by hand is no longer commercially sustainable for large classes of problems.
So if the cost of writing code trends toward zero, the bottleneck must move elsewhere. It's unclear exactly where, but good candidates are specifications, design, and testing. Writing code is about producing behavior; design, specification, and testing are about deciding which behavior is acceptable and which failures cannot be tolerated.
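As a hypothetical sketch of that division of labour: the function below is the kind of code an AI can generate trivially, while the test encodes a business decision, here an invented rounding policy for invoice totals, that only a human can make and own.

```python
from decimal import Decimal, ROUND_HALF_UP

def round_invoice_total(amount: Decimal) -> Decimal:
    # Choosing ROUND_HALF_UP over banker's rounding decides who absorbs
    # half-cent differences -- a business judgement, not a coding task.
    return amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def test_half_cents_round_up():
    # The test is the specification: any generated implementation,
    # human- or AI-written, must satisfy this decided behavior.
    assert round_invoice_total(Decimal("10.005")) == Decimal("10.01")
    assert round_invoice_total(Decimal("10.004")) == Decimal("10.00")
```

The implementation is interchangeable; the test, which captures a deliberate choice about acceptable behavior, is the part whose cost does not go to zero.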
For example, generic AI cannot, at least as of this writing, answer questions like:
- Who decides what to build? And what not to build?
- Why do we need something in the first place?
- What is the cost of failure?
AI can propose solutions, but it cannot own the consequences of deploying them. It cannot decide which risks are acceptable, which failures are existential, or which ideas are simply not worth building at all. That judgement remains in the human realm, and hopefully will stay there.
What's your opinion?