Like “Computer Science” — which had a very different and much better meaning when first coined in the 60s (in part, it represented real aspirations towards finding the strongest notions of itself) — “Object-Oriented Programming”, when I coined the term about 50 years ago, also had a different set of meanings and aspirations.

Today in computing, we find ourselves in situations magnified many orders of magnitude by Moore’s Law and the success of the inventions of Personal Computing and the Internet. I think it’s worth trying to think things through carefully rather than (a) trying to deal with the current conceptions of “OOP”, and/or (b) going back rigidly to much of what was so powerful in the computing milieu 45 years ago.

In rethinking things, we find some old friends in ideas — such as protected modules that are “whole computers”, non-command messaging, requirements and constraints, transactions, “before-and-after”, meta-levels, separating meanings from methods, “objects” as “servers”, and so forth. And many of the old dangers: race conditions, indeterminacy, scaling, reformulation, hopeless (and needless) complexities, and many more.

We are still faced with the large problems of design at too many levels, because it is rare that each concern and requirement can be satisfied in complete isolation, and the number and kinds of degrees of freedom that seem to be needed preclude much of classical mathematical treatment in favor of building and debugging.

There are a number of truly important ideas — many from the “deep past” in our field — that need to be comprehensively understood and pondered — both for intrinsic beauty, and to ask what they mean for today.

For example, the first completely startling system-with-objects that knocked me on the head 50 years ago was Ivan Sutherland’s Sketchpad, already 4 years old. The “entities” in Sketchpad were mostly graphical — they showed up on the display as “things” made of “lines” that were made of “end-points” that had “x-values” and “y-values” — but they were not “data structures” (they were “behavioral”, and the Sketchpad programmer could not do anything like an “assignment statement”).

Instead, Sketchpad was “programmed” by a combination of hand-constructed “objects” whose behaviors were “impressed” on the objects in terms of “constraints” (which were the dynamic requirements for each object). The Sketchpad system itself dynamically “solved” the intertwined requirements — and this let the “programmer” think in terms as linear as possible, leaving most brain cells free for the difficult problems of design and purpose.
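To make the flavor concrete, here is a minimal sketch in Python (my illustration, not Sketchpad’s actual machinery; the names Point, coincident, horizontal, and solve are invented for this example). The “program” only states requirements, and a generic relaxation loop satisfies them.

```python
# A minimal sketch of "programming in constraints": state the requirements,
# then let a generic solver relax the objects until they are satisfied.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def coincident(a, b):
    """Constraint: points a and b should be at the same location."""
    def error():
        return (a.x - b.x) ** 2 + (a.y - b.y) ** 2
    def relax(rate=0.5):
        # Move both points toward their common midpoint.
        mx, my = (a.x + b.x) / 2, (a.y + b.y) / 2
        a.x += (mx - a.x) * rate; a.y += (my - a.y) * rate
        b.x += (mx - b.x) * rate; b.y += (my - b.y) * rate
    return error, relax

def horizontal(a, b):
    """Constraint: the segment from a to b should be horizontal."""
    def error():
        return (a.y - b.y) ** 2
    def relax(rate=0.5):
        my = (a.y + b.y) / 2
        a.y += (my - a.y) * rate
        b.y += (my - b.y) * rate
    return error, relax

def solve(constraints, iterations=200, tolerance=1e-9):
    """Repeatedly relax every constraint until the total error is negligible."""
    for _ in range(iterations):
        if sum(err() for err, _ in constraints) < tolerance:
            break
        for _, relax in constraints:
            relax()

# The "program" is just the requirements; the solver does the rest.
p1, p2, p3 = Point(0, 0), Point(3, 4), Point(5, 1)
solve([coincident(p2, p3), horizontal(p1, p2)])
print(p1, p2, p3)  # after relaxation, p2 and p3 coincide and p1-p2 is (nearly) horizontal
```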

The field back then — and this author — were not up to really carrying this model forward; instead, we found ways to approximate some of the ideas, but at real cost to the integrity of aim that Sketchpad brought. There were a few important exceptions over the years.

But today, it is possible to really address these important ideas about “designing and programming in requirements” with complete separations of “tuning” and “optimizing”.

A lot of the best systems in the future will be a lot more like Sketchpad in approach than most systems today. We need to work to make this happen!

There are a number of other really important ideas from the early 60s that have missed becoming part of our basic tools and thoughts today (partly from the faddism that has always been rife in computing, partly because our not-quite-a-field cares no more about history than any manifestation of pop-culture, etc.).

A good example is how John McCarthy, in the early 60s, was able to advance states in time without race conditions and without violating “logical and functional relationships”. He called the mechanism “fluents”. Today, one of the terms used for this is “computing in pseudotime”. The idea should be familiar: instead of destructively changing things, retain a history of the changes going forward, each new event representing an increasing point in pseudotime, which becomes an obligatory parameter on every object — the aim is consistency of relationship at each pseudotime. With a few more niceties we wind up with a universal use of “atomic transactions”, “versions”, etc.
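Here is a small sketch of that idea in Python (my own illustration, with invented names like Fluent, at, and advance; it is not McCarthy’s formulation or any particular system’s design): writes never destroy anything, they only add a new version at a later pseudotime, and every read names the pseudotime it wants to see.

```python
# A value that varies over pseudotime; old versions are never destroyed,
# so readers at any given pseudotime always see a consistent, unchanging answer.
import bisect

class Fluent:
    def __init__(self, initial, t=0):
        self._times = [t]         # sorted pseudotimes of each version
        self._values = [initial]  # value established at each pseudotime

    def at(self, t):
        """The value holding at pseudotime t (the latest version at or before t)."""
        i = bisect.bisect_right(self._times, t) - 1
        if i < 0:
            raise ValueError("no version exists at or before this pseudotime")
        return self._values[i]

    def advance(self, t, value):
        """Record a new version at a strictly later pseudotime."""
        if t <= self._times[-1]:
            raise ValueError("pseudotime must move forward")
        self._times.append(t)
        self._values.append(value)

balance = Fluent(100, t=0)
balance.advance(t=1, value=80)   # a withdrawal becomes a new version, not an overwrite
balance.advance(t=2, value=130)  # a deposit becomes another
print(balance.at(0), balance.at(1), balance.at(2))  # 100 80 130
```

Grouping several such advances so they all appear at the same new pseudotime, or not at all, is essentially what an “atomic transaction” adds on top of this.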

It should be clear that there is no conflict at all between the idea of protected modules, non-command messages, and “functional relationships”.

There are many more important parts to think about and rethink — but this is already too detailed.

A good heuristic for my own thinking about our new not-quite-a-field is to not just “think systems” (avoiding lower level mechanisms), but to “think Biology”. The latter is tricky because not all the systems principles that can be and are used by Biology are within the current scales of computing. But, if you think about “cells as objects” then many important principles quickly come to mind. (And if we look around for the system that is most like this today, we find Erlang and its derivatives …)
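As a toy illustration of that “cells as objects” flavor (a sketch in Python rather than Erlang, with invented names like Cell and counter): each cell keeps its state private, and the only way to affect it is to send a message into its mailbox.

```python
# Each "cell" is an isolated unit of state with a mailbox and a behavior;
# nothing reaches inside it, everything goes through messages.
import queue, threading

class Cell:
    def __init__(self, behavior, state):
        self._mailbox = queue.Queue()
        self._behavior = behavior
        self._state = state
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        """Sending a message is the only way to interact with a cell."""
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:          # stop signal for this cell
                break
            self._state = self._behavior(self._state, message)

def counter(state, message):
    kind, payload = message
    if kind == "add":
        return state + payload
    if kind == "report":
        payload.put(state)               # reply via a channel carried in the message
    return state

c = Cell(counter, state=0)
for _ in range(3):
    c.send(("add", 2))
reply = queue.Queue()
c.send(("report", reply))
print(reply.get())                       # -> 6
c.send(None)
```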

My conclusion here is that in the early stages of any field, it is not a good idea to get rigid and dogmatic, even religious, about “principles that are not strong enough to be principles”.

The Turing Award winner Tony Hoare had a great observation about us in general: “Debugging is harder than programming, so don’t use all your cleverness in writing the program” (and this goes for design too!).
