There are lots of dimensions to “computer programming” and “programming languages”, so a simple ordering won’t be satisfactory. But there is also a puzzle, I think, in that “something happened” in the 80s to what had been major improvements every 11 years or so. In 1984 I was asked to write the lead article for the old Scientific American for a special issue on “Computer Software”, and included the following diagram:
The article noted with tongue in cheek that the transition from “a better old thing” to “almost a new thing” and then “a new thing” happened at about sun-spot intervals …
Lisp shows up both as a transitional language and as a “new thing” because more than most computer languages it appears as a kind of “material” as much as “a programming style”.
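To make the “material” point concrete: Lisp programs are built out of the same lists that Lisp programs manipulate, so code can be inspected, reshaped, and rebuilt like any other data. Here is a minimal sketch of that property in Python, with nested lists standing in for s-expressions; the toy evaluator is mine for illustration, not any particular Lisp:

```python
import operator

# A toy evaluator over nested lists standing in for s-expressions.
# The point: the "program" below is an ordinary data structure.
ENV = {"+": operator.add, "*": operator.mul}

def ev(x, env=ENV):
    if isinstance(x, str):                 # a symbol: look it up
        return env[x]
    if not isinstance(x, list):            # a literal, e.g. a number
        return x
    op, *args = [ev(e, env) for e in x]    # evaluate operator and operands
    return op(*args)

expr = ["+", 1, ["*", 2, 3]]               # the program (+ 1 (* 2 3)), as data
print(ev(expr))                            # 7

expr[0] = "*"                              # reshape the program like any list
print(ev(expr))                            # now (* 1 (* 2 3)) -> 6
```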
In 1984 I picked several other things that I thought were transitional. For example, the advent of spreadsheets brought up the possibility of massively parallel “swarm” programming, and I devoted some of my article to what Moore’s Law would allow. I thought the other promising direction would be “more meaning” in programming, and especially programming languages and systems that could do problem solving on behalf of the larger goals of the programming. This was inspired by what Sketchpad had been able to do for numerically constrained relationships, and the idea extended to “requirements” and “specifications” becoming “runnable” and “debuggable”. I said less about this in the article.
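Sketchpad’s trick, roughly, was to let the programmer state constraints declaratively and have the system satisfy them by numerical relaxation. Here is a minimal sketch of that flavor in Python; the solver, step sizes, and the two example constraints are my illustrative choices, not Sketchpad’s actual machinery:

```python
def relax(vals, constraints, steps=20000, h=1e-6, lr=0.005, tol=1e-10):
    """Nudge each variable downhill on the total squared constraint error."""
    def total_err(v):
        return sum(c(v) ** 2 for c in constraints)
    for _ in range(steps):
        err = total_err(vals)
        if err < tol:                      # all constraints (nearly) satisfied
            break
        grads = {}
        for name in vals:                  # one-sided numerical gradient
            bumped = dict(vals)
            bumped[name] += h
            grads[name] = (total_err(bumped) - err) / h
        for name in vals:
            vals[name] -= lr * grads[name]
    return vals

# The "specification": point (x, y) lies on the line y = 2x and on the
# circle x^2 + y^2 = 5. We state the meaning; the system finds the how.
solution = relax(
    {"x": 1.0, "y": 1.0},
    [lambda v: v["y"] - 2 * v["x"],
     lambda v: v["x"] ** 2 + v["y"] ** 2 - 5],
)
print(solution)   # converges to a satisfying point, (x, y) ≈ (1, 2) here
```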
If we look at this today, 35 years later, it is hard to fit what happened into the ascending qualitative levels I used for the first 35 years of programming. We can see that, though Simula was one of the inspirations for Smalltalk, it was its transitional model that persisted, as C++. The Planner-to-Prolog-to-Eurisko line petered out in programming, but did form a major part of a real “meaning based” system, Doug Lenat’s CYC. Interestingly, the “swarm object pattern matching” ideas did get reified in Joe Armstrong’s Erlang (he started independently, with his own instincts and goals), and this language is quite active today, though perhaps not mainstream.
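The Erlang flavor of “swarm” programming is, roughly: very many lightweight processes, no shared state, each one pattern-matching on the messages in its mailbox. A rough sketch of that shape in Python, with threads and queues standing in for Erlang’s far lighter processes; the process roles and message shapes here are invented for illustration (requires Python 3.10+ for `match`):

```python
import threading, queue, time

class Process(threading.Thread):
    """One 'process' = private state + a mailbox + a receive loop."""
    def __init__(self, handler, state=None):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()
        self.handler = handler
        self.state = state
        self.start()

    def send(self, msg):
        self.mailbox.put(msg)              # the only way in: a message

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg[0] == "stop":
                break
            self.state = self.handler(self.state, msg)

def counter(total, msg):
    match msg:                             # dispatch on the message's shape
        case ("add", n):
            return total + n
        case ("report", reply_to):
            reply_to.send(("print", f"total = {total}"))
            return total

def printer(_, msg):
    match msg:
        case ("print", text):
            print(text)

p = Process(printer)
c = Process(counter, state=0)
c.send(("add", 3))
c.send(("add", 4))
c.send(("report", p))                      # prints "total = 7"
time.sleep(0.1)                            # let the mailboxes drain
c.send(("stop",)); p.send(("stop",))
```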
To be as brief as possible here, the surprising thing to me over the last 35 years has been the lack of “real engineering vigor” in “software engineering”. While other engineering fields were starting to use CAD and SIM to design and vet their designs, often on supercomputers, the shoemaker’s children wound up with no shoes: much software development retained poor and weak methods from the past, staying with simulated punched cards and non-live development.
And in the last 15–20 years, the real engineering fields have increasingly been able to add an automated “FAB” stage, derived from the CAD<->SIM process. In software engineering, this would mean being able to move automatically from designs vetted on supercomputers to optimized systems that would work on commodity machines. One part of this optimization has been done; it does not use “meaning” but is more of a Moore’s Law device: the tracing JITers that are being used more and more. The seed of this technique was pioneered by the legendary Peter Deutsch for Smalltalk systems in the early 80s, and today it has been taken far enough to be really useful along many dimensions.
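The tracing idea, in caricature: interpret as usual, notice when a loop gets hot, record the operations of one pass through it as a straight-line trace, then keep running a compiled version of just that trace while its entry condition holds. A toy of the shape in Python; this is my illustrative reduction, not Deutsch’s actual Smalltalk design, which centered on dynamic translation and inline caches:

```python
HOT = 10   # how many passes before we consider the loop worth compiling

def interpret(n):
    """Sum 0..n-1 via a hand-run 'bytecode' loop that traces when hot."""
    i, total = 0, 0
    hits, compiled = 0, None
    while i < n:                          # the loop guard also guards the trace
        if compiled:
            i, total = compiled(i, total) # fast path: run the compiled trace
            continue
        hits += 1
        if hits == HOT:
            # "Record" one iteration's operations as a linear trace and
            # compile it. A real tracer emits machine code; we emit Python.
            trace = ["total = total + i", "i = i + 1"]
            src = ("def fast(i, total):\n"
                   + "".join(f"    {op}\n" for op in trace)
                   + "    return i, total\n")
            env = {}
            exec(compile(src, "<trace>", "exec"), env)
            compiled = env["fast"]
        total += i                        # slow path: ordinary interpretation
        i += 1
    return total

print(interpret(100))   # 4950 either way; only the speed differs
```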
As for programming itself, the rallying cry I’ve tried to put forth is: “It’s not BIG DATA, but BIG MEANING”. In other words, the next significant threshold that programming must achieve is for programs and programming systems to have a much deeper understanding of both what they are trying to do, and what they are actually doing. That this hasn’t happened in the last 35 years is a really unfortunate commentary on the lack of a maturation process for computing.
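One small embodiment of what “big meaning” could look like in the everyday: a program carries a runnable statement of what it is trying to do, and the system continually checks the doing against the trying. A sketch in Python, with invented names:

```python
import random

def sort_meaning(inputs, output):
    """The *intent* of sorting, stated declaratively and runnably."""
    assert all(a <= b for a, b in zip(output, output[1:])), "not ordered"
    assert sorted(output) == sorted(inputs), "not a permutation of the input"

def my_sort(xs):          # the implementation being held to its meaning
    return sorted(xs)

# The system probes intent against behavior, here with random trials.
for _ in range(100):
    xs = [random.randint(0, 99) for _ in range(random.randint(0, 20))]
    sort_meaning(xs, my_sort(xs))
print("behavior matched stated meaning on 100 random trials")
```

This is of course just property checking in miniature; the threshold described above asks for such statements of intent to pervade the whole of programming.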