John Ohno

The primary task of a programmer is not to program but to understand, in detail, the intent of whoever is in charge of the project. Most projects are not specified precisely enough in the planning stages to be unambiguous: a programmer who doesn't push back against the project manager will inevitably produce code that matches the written requirements but is nothing like what was intended, and the same will be true of any future AI.

We have had programs that write programs for seventy years. We have had AI that ‘intelligently’ writes programs (in the form of optimizing compilers) for fifty, and we have had AI that writes programs based on requirements (so-called ‘provers’, ‘planners’, and ‘solvers’) for forty-five. In other words, in important ways, programming already is automated — and much of the work programmers do is either in creating automation for programming or in working around the problems produced by poorly-designed automation. These tools are very useful, and I wish programmers used them more often, but in order to get reasonable results out of them, a human being needs to be able to clarify requirements.
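To make the "programs from requirements" idea concrete, here is a minimal, purely illustrative sketch in Python of the solver-style approach: the requirement is stated as input/output examples, and a tiny expression grammar is searched until some composition of rules satisfies it. The grammar, the synthesize function, and the example spec are all invented for this sketch; real provers, planners, and solvers are far more sophisticated, but the shape (requirements in, program out) is the same.

```python
# Toy enumerative synthesizer: given a requirement stated as
# input/output examples, search a tiny expression grammar for a
# program that satisfies it. Illustrative only.
from itertools import product

# Candidate building blocks (the "grammar").
UNARY = {
    "x + 1": lambda x: x + 1,
    "x * 2": lambda x: x * 2,
    "x * x": lambda x: x * x,
    "-x":    lambda x: -x,
}

def synthesize(examples, depth=2):
    """Return the first composition of grammar rules (up to `depth`
    steps) whose behavior matches every (input, output) example."""
    for n in range(1, depth + 1):
        for combo in product(UNARY.items(), repeat=n):
            names = [name for name, _ in combo]
            funcs = [f for _, f in combo]

            def program(x, funcs=funcs):
                for f in funcs:
                    x = f(x)
                return x

            if all(program(i) == o for i, o in examples):
                return " then ".join(names), program
    return None

# Requirement: "square the input, then add one".
spec = [(0, 1), (2, 5), (3, 10)]
found = synthesize(spec)
print(found[0] if found else "no program found")  # -> "x * x then x + 1"
```

Everything a real tool adds on top of this (pruning, types, proofs, cost models) exists to make that search tractable; none of it removes the need for someone to state the requirement correctly in the first place.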

There are cases where a substantial amount of this clarification can be performed by the program itself, in which case a non-technical user or project manager can do the work directly. (For instance, GUI builders and similar point-and-click systems generate fairly complicated code from fairly uniform needs; the uniformity of the domain makes otherwise arcane tasks straightforward enough that non-technical users can perform them, as the sketch below illustrates.) However, the cases that have not already been automated this way remain unautomated because they are hard. Specifically, the requirements that are difficult to disambiguate tend to be the ones the original designer has never thought about at all, even though they matter a great deal, and for which no safe default exists, or the right default can only be determined from context the software system has no access to.
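As a concrete, invented illustration of the pattern those builders rely on: when the domain is uniform enough (every form is just named, typed fields), a declarative spec that a non-programmer could assemble by pointing and clicking is all the generator needs. The spec format and the generate_validator function here are hypothetical.

```python
# Sketch of the pattern behind GUI builders and similar generators:
# a declarative description of a uniform domain is enough to emit
# the repetitive code. Spec format and generator are invented.

FIELD_CHECKS = {
    "int":   "int({name})",
    "float": "float({name})",
    "str":   "str({name}).strip()",
}

def generate_validator(form_name, fields):
    """Emit Python source for a validator function from a list of
    (field_name, field_type) pairs."""
    lines = [f"def validate_{form_name}(data):", "    clean = {}"]
    for name, ftype in fields:
        expr = FIELD_CHECKS[ftype].format(name=f"data[{name!r}]")
        lines.append(f"    clean[{name!r}] = {expr}")
    lines.append("    return clean")
    return "\n".join(lines)

# The "point-and-click" part boils down to assembling this spec:
spec = [("age", "int"), ("height", "float"), ("nickname", "str")]
source = generate_validator("profile", spec)
print(source)

# The generated code can be executed directly:
namespace = {}
exec(source, namespace)
print(namespace["validate_profile"](
    {"age": "42", "height": "1.8", "nickname": " jo "}))
```

Note what made this possible: every question the generator would otherwise have to ask (what fields? what types?) has an obvious place in the spec. The hard cases are exactly the ones where the important questions have no such slot.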

AI isn't really improving in this domain: most AI work goes not into provers but into statistical learning, which is basically a matter of identifying or duplicating observed behavior in a shallow way, and to the extent that provers are getting better more than incrementally, the progress is coming from the formal methods and type theory crowds. If AI does improve here, it will be by slowly improving the normal-case behavior of certain kinds of tools intended for programmers, so that programmers don't need to specify as much 'obvious' stuff, and over time the required skill level for programmers will go down.
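One small, existing example of that trend (my example, not one from the answer above) is Python's standard-library dataclasses: the field declarations alone are enough to generate the constructor, representation, and equality code that would otherwise be 'obvious' boilerplate the programmer has to spell out.

```python
# Tools absorbing the 'obvious' stuff: Python's dataclasses module
# generates __init__, __repr__, and __eq__ from the field
# declarations, so only the non-obvious parts need to be written.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
print(p)                      # Point(x=1.0, y=2.0)
print(p == Point(1.0, 2.0))   # True
```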

But we're pretty far from that in most domains: partly because programmers largely prefer tech and methods from the early 1970s, partly because early decisions about platforms have locked them into antique methods, and partly because most professional developers have a profound ignorance of the history of the field, which keeps them from progressing on this kind of work. As developers, we tend to refuse to use time-saving and effort-saving techniques invented after 1975, and so the projects pushing program synthesis forward sit in their own little ghetto where everybody knows Haskell and Idris, inaccessible to and unobserved by most developers, let alone non-technical users and project managers.

TL;DR: AI isn't doing much for program synthesis, nobody uses the old tech that would save lots of time and effort in that domain, and neither of those things has changed in fifty years, so don't count on it changing in the next four.
