Assuming you’re interested in the reasons Dijkstra had to say such a thing, you have to consider his approach to programming in general. Dijkstra developed C.A.R. Hoare’s approach to formal program specification (what we now call “Hoare triples”) into the formal weakest-precondition calculus. He also wrote a book on how to design and develop correct-by-construction programs starting from a formal specification (postconditions, then deriving the program and weakest preconditions backwards from them). By the way, the book is great and rather short; I recommend at least skimming through it.
So Dijkstra was interested in formally proven, correct-by-construction programs using the formalism of Hoare triples. But you cannot apply Hoare triples to object-oriented programs, hence you cannot prove their correctness. Therefore, for Dijkstra, OOP was crap.
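For readers who have not seen this formalism, here is a minimal sketch (my example, not Dijkstra’s) of a Hoare triple and a weakest precondition, expressed as Python assertions:

```python
# A Hoare triple {P} S {Q} says: if precondition P holds before statement S,
# then postcondition Q holds afterwards.
#
# For the assignment x := x + 1 and the postcondition x > 0, the weakest
# precondition is found by substituting backwards into the postcondition:
#   wp(x := x + 1, x > 0)  =  (x + 1 > 0)  =  (x >= 0)   for integer x

def increment(x: int) -> int:
    assert x >= 0   # weakest precondition P, derived backwards from Q
    x = x + 1       # the statement S
    assert x > 0    # postcondition Q
    return x
```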
To those who have heard about “contracts” and “design by contract” (© Bertrand Meyer and Eiffel): I have used that, I was part of research on it, and I can tell you it doesn’t work. First of all, class invariants are unsound and work only in simple cases. You get a whole can of worms with “callbacks”, mutual invariants (my invariant depends on the invariant of another class) and so on. The formalisms to describe this (things like “object ownership”) are so complicated that they are unusable in practice, and they still forbid many use cases. Then, to check pre- and postconditions, you have to consider object reference aliasing, which is still an open problem. And again, a complicated one.
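To make the callback problem concrete, here is a small hypothetical sketch showing how a class invariant can be observed in a broken state during a callback, even though each method looks correct in isolation:

```python
class Account:
    """Intended invariant: balance == sum(history)."""

    def __init__(self, observer):
        self.balance = 0
        self.history = []
        self.observer = observer

    def deposit(self, amount):
        self.balance += amount
        # Invariant is temporarily broken: balance updated, history not yet.
        self.observer.on_deposit(self)   # the callback sees the broken state
        self.history.append(amount)
        # Invariant holds again on exit.

class Auditor:
    def on_deposit(self, account):
        # This re-entrant check fails, even though Account "has" the invariant.
        assert account.balance == sum(account.history)
```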
Besides, I’m troubled by the fact that we don’t have a mathematical model for OOP. We have Turing machines for imperative (procedural) programming, lambda calculus for functional programming, and even pi-calculus (and CSP, by C.A.R. Hoare again, and other variations) for event-based and distributed programming, but nothing for OOP. So the question “what is a ‘correct’ OO program?” cannot even be defined, much less answered.
Indeed, OOP was (and still is) adopted mostly for UI and “business applications” where ultimate correctness isn’t necessary. From a business standpoint there is no need for a 100% correct answer 100% of the time. You just need a “good enough” answer, 90% of the time, to be able to earn more money than you lose. And time-to-market might be much more important than correctness and stability. There’s nothing inherently bad about this approach; that’s basically how we all earn money and stay alive. In the fields where correctness is a must (NASA, healthcare, high-frequency trading and other banking operations), they don’t use OOP.
As a last remark about the correctness of OOP, you can look at OO programs out there (on GitHub, for example) and see for yourself that most of the time they violate even such a basic correctness principle as the Liskov substitution principle (in the sense that overriding methods should obey the contract of their ancestor, i.e. basically do the same thing plus, possibly, something more).
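As a hypothetical illustration of the kind of violation meant here, consider an override that breaks the contract its ancestor promises:

```python
class Repository:
    """Contract: save() accepts any key/value pair and stores it."""

    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

class ReadOnlyRepository(Repository):
    def save(self, key, value):
        # Liskov violation: code written against Repository does not
        # expect save() to refuse valid input.
        raise RuntimeError("repository is read-only")
```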
When I heard this, I thought it was really funny for a number of reasons — Edsger and I were friendly (via Bob Barton) and he loved to come up with snide funny comments.
However, he certainly knew Nygaard and Dahl (of Simula fame) and that they were Norwegian.
He possibly knew that the earlier Sketchpad had been invented in Massachusetts.
He knew — via Barton — that I had my ideas while in Utah. Etc.
Perhaps more interesting is that he was the inventor of the semaphore coordination mechanism (ca. ’62), and a semaphore is an object that is an instance of an idea about synchronization, each of which has a local protected variable and only two protected operations. An irony is that this was a bit ugly to make “neatly” in the Algol of the day, and in Simula (the encapsulation could be violated). When added to an HLL, it was usually done as a “feature” (a special kind of variable), rather than thinking through the larger consequences of being able to make many such useful things. (And Hoare’s “monitors” were also objects, etc.)
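To see the point, here is a minimal sketch (mine, not Dijkstra’s or Simula’s) of a counting semaphore expressed as an object: one protected variable and two protected operations, using Dijkstra’s P and V names:

```python
import threading

class Semaphore:
    def __init__(self, count=1):
        self._count = count                  # the local, protected variable
        self._cond = threading.Condition()   # guards access to _count

    def P(self):                             # Dijkstra's "wait"
        with self._cond:
            while self._count == 0:
                self._cond.wait()
            self._count -= 1

    def V(self):                             # Dijkstra's "signal"
        with self._cond:
            self._count += 1
            self._cond.notify()
```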
The other fun side-note is that McCarthy had invented “fluents” a bit earlier, a much more useful idea than semaphores and monitors for avoiding race conditions, and one that was not well enough understood at the time to catch on.
He did like to pull chains …

Edsger Dijkstra, a prominent computer scientist, expressed his skepticism about object-oriented programming (OOP) primarily due to his belief in the importance of strong mathematical foundations in programming and software engineering. His statement reflects a few key concerns:
- Complexity: Dijkstra viewed OOP as introducing unnecessary complexity into programming. He believed that the principles of OOP could lead to overly complicated designs and make programs harder to understand and maintain.
- Abstraction vs. Implementation: Dijkstra emphasized the need for clear distinctions between abstraction and implementation. He argued that OOP often blurs these lines by tying data and behavior together in ways that can obscure the underlying logic of a program.
- Historical Context: His comment about California likely stems from the cultural and technological environment of Silicon Valley during the rise of OOP in the late 20th century. Dijkstra was critical of trends in programming that he saw as driven more by fashion and marketing than by sound engineering principles.
- Alternative Paradigms: Dijkstra was a proponent of structured programming and formal methods, which focus on clear, logical flow and rigorous proof of correctness in software. He believed these approaches were more effective than OOP for producing reliable software.
Overall, Dijkstra's critique of OOP was rooted in his broader philosophy about programming and software design, emphasizing clarity, simplicity, and mathematical rigor. His perspective has sparked ongoing debates in the software engineering community regarding the merits and drawbacks of different programming paradigms.
As others have noted, the provenance of the quote is dubious, but the views he expressed elsewhere suggest that it's not far off the mark.
If you consider "programming as a branch of formal mathematics and applied logic" (which, at its best, it is), object oriented programming genuinely has little to nothing to offer.
But that is because that's not the problem OOP is trying to solve. Object oriented programming leverages the fact that humans have millions of years of evolution invested in conceiving of the world in terms of things which have properties and associated methods of doing things with them. A salt shaker has a property of the amount of salt in it, and can be shaken.
OOP isn’t an attempt to further programming as a branch of formal mathematics and applied logic - if anything, it obfuscates that view of programming. What it is is an attempt to give humans a way to think about programs using the cognitive firmware we all have in our heads for thinking about the world around us. It’s an adapter layer between how humans think and how computers need to be programmed.
As with anything that lets you pretend you’re doing one thing (manipulating objects) when you’re really doing something else (creating sequences of instructions that will be run by a computer), there are some impedance mismatches and it creates some problems.
Probably the biggest issue with OOP at this point in the history of computers is the fact that OOP programming encourages mutable state to be diffused all over your program. On modern hardware, that creates several problems:
- Parallelism is the way forward for increasing performance - clock speeds are not going up year over year the way they did years ago; the number of cores and concurrent threads is what is growing.
- As soon as you parallelize, all that mutable state in your OOP program requires careful management to avoid race conditions (see the sketch after this list). A paradigm in which it is common not to know where all your state lives is difficult to parallelize unless you designed for it from day one - and if you did design for it from day one, that design has likely introduced complexity that offsets the simplicity OOP promises. So OOP naturally leads to a programming style that makes retrofitting parallelism onto a codebase costly and bug-prone, and demands senior-developer-level skills to do well - when the point was to empower non-senior-level people to write software.
- Access to memory is buffered through several layers of cache memory. With state diffused all over the place, when the program needs to access two pieces of data, the common case is that they are not colocated in memory and you incur a cache miss - where the CPU has to access the much slower main memory. This can make an enormous difference in performance.
- Mutable-by-default is dangerous - anything that can change at runtime multiplies the possible states the program can have, and therefore the number of potential bugs.
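As a tiny, hypothetical illustration of the race-condition point above: two or more threads mutating one shared object without coordination can silently lose updates.

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1   # read-modify-write: not atomic

counter = Counter()

def worker():
    for _ in range(100_000):
        counter.increment()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value)   # expected 400000; without locking it can come up short
```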
These are not fussy, esoteric problems - they’re a fundamental mismatch between what OOP programming naturally leads to and the way modern hardware works.
OOP has its place - GUI toolkits are the most obvious example. But if you’re working on a project and someone leading it is trying to do “pure OOP”, run away. Unless the project fits into an extremely narrow band of things OOP is an exact fit for, you’re in the presence of a fetish, not an engineering principle. It’s great for learning programming and empowering neophytes, but sooner or later it becomes a ghetto, and if you want to grow, you need to move beyond it and treat it as a tool in the toolbox and nothing more.
This same Dijkstra certainly did write the, now famous, letter "Go To Statement Considered Harmful" (Communications of the ACM. 11 (3): 147–148). His main objection to Go To statements was that their use hinders reasoning about programs. In a way, the mechanisms offered by object-oriented programming languages can be even worse than Go To statements. Let me explain.
A (conventional) Go To statement transfers control to another location in the program. The target location is statically determined, that is, it is known at design time. But the target location can be (almost) anywhere in the program, resulting in what is sometimes called spaghetti code. Structured programming (promoted by Dijkstra) can be viewed as a way of limiting control transfer, in order to help the reasoning process.
What about object-oriented (OO) programming? First, we define a class C with a method f(), and an object c of type C. It is clear what code gets executed when c.f() is called. Now, define a class D that inherits from C and overrides f(), that is, associates different implementation code with that function name. Next, we introduce a (polymorphic) variable v of type C. What code gets executed when v.f() is called? This is not statically determined, but depends on the run-time type of v, that is, on the type of the object that v holds (refers to). It could be either C.f() or D.f(). In some OO programming languages (e.g. Java), inheritance and overriding can even be done anonymously, on the fly, thereby worsening the situation.
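A minimal sketch of exactly this situation (the names C, D, f and v follow the text above):

```python
class C:
    def f(self):
        return "C.f"

class D(C):
    def f(self):          # overrides C.f
        return "D.f"

def call_f(v: C):
    # Which implementation runs is decided by the run-time type of v,
    # not by anything visible at this call site.
    return v.f()

print(call_f(C()))   # prints C.f
print(call_f(D()))   # prints D.f
```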
It is in this sense that object-oriented programs, when making unbridled use of the OO mechanisms just described (inheritance, overriding, and polymorphic variables), can give rise to dynamic spaghetti, which is worse than the static spaghetti made possible by Go To statements. This impedes reasoning even more. If you have ever had to deal with such OO code, you know what I mean.
Design patterns can be viewed as standardized ways to limit the arbitrary use of OO mechanisms, with the aim to improve reasoning about OO programs. Instead of ‘improved reasoning’, the benefits of design patterns are nowadays advertised as improved software qualities, such as understandability, verifiability, and modifiability.
But design patterns are a weak substitute for (elegant and usable) mathematical axiomatization, which we do have for structured programming.
Preface
Attempting to guess at anybody’s thoughts on a subject is difficult unless we have a direct record of their thoughts on that subject (and given how people’s thinking changes over time, it can be difficult even then). It’s almost unavoidable that this answer probably reflects my own thoughts and feelings to at least some degree, not just Dijkstra’s own. I’ve done my best to simply set a note of context, then quote Dijkstra, but it’s possible that by setting a different context, I’ve mis-characterized what he actually believed.
So, I’ve tried to be as accurate as I can, but it’s imperfect at best.
[Side note: the original paper is an image of a type-written page, so I’ve re-typed his statements. Please excuse any typos.]
Discussion
I seriously doubt that he did say it.
I suspect he probably felt that object oriented programming wasn’t exactly wrong, but that it was largely superficial (or at least that it was taught superficially).
Probably the single biggest tenet of object oriented programming is separation of the interface from the implementation. The “Three Pillars” are basically just language features necessary to support that. Especially early in the history of object orientation this was largely taught (and, I think, seen) as a worthy goal in itself (and, many still see it that way).
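For concreteness, a minimal sketch (my example, not Dijkstra’s) of what “separation of interface from implementation” usually means in code:

```python
from abc import ABC, abstractmethod

class Queue(ABC):
    """The interface: the only thing callers are supposed to rely on."""

    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self): ...

class ListQueue(Queue):
    """One implementation; callers written against Queue never see the list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop(0)
```

Note that this gives only a syntactic separation: nothing above states, for instance, that pop() returns items in the order push() received them. That semantic gap is exactly what the rest of this answer is about.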
To get an idea of Dijkstra’s view, let us consider the opening paragraph of EWD 562 (and though I won’t reference it directly each time, the remaining quotes in this answer are from EWD 562 as well):
We all know that when we have to design something “large” or “difficult”, we have to apply in one way or another the old adagium “Divide and Rule”. Our machines are made from components, our programs are made from modules and they should fit together via interfaces. That is fine, but it raises, of course, the questions how to choose the modules and how to formulate the interfaces. This paper explores the main goals of modularization; being aware of them should assist us in evaluating the quality of proposed modularization.
He went on to talk about another thing seen as a worthy goal by many advocates of object oriented programming: “Separation of concerns”.
The usual catchphrase for this technique is “separation of concerns”. Although very adequate from a descriptive point of view, it raises of course the same sort of question as we raised initially about modules and interfaces, such as “which concerns should be separated?” and perhaps “After separation, how do we combine again?”. This similarity is a rather clear hint that the successful “modularization” of a large information processing system is not a trivial matter.
So, while being able to separate the interface from the implementation is important, in his view it was at least equally important that the result not be a merely syntactic separation, but that the semantics of the interface be defined as well. For example, it’s not enough to know that we have an interface foo that supports operations bar and baz. We need to know the actual properties of foo for it to be useful:
[…]in the invention of the complex composite systems we are considering, a rigorous definition of the essential properties of the parts is not a luxury but a necessity. In the final part of my talk I would like to tackle the more elusive question “By virtue of what type of properties can parts be nice or ugly?” It is the question what interfaces to invent. I called this question “elusive” because it is as impossible to give a final answer to it [as it] is impossible to teach young mathematicians how to discover beautiful theorems. What we can do—and, in fact, do while teaching mathematics—is explaining why we think some theorems are beautiful. In a similar vein we should be able to explain to young computer scientists what virtues to look for when they are evaluating or considering a proposed set of interfaces. It is in this connection a good thing to remember that one of the main roles of the decomposition of a whole into parts was the localization of the bug when something went wrong.
Although this paper isn’t specifically about object oriented programming, it describes Dijkstra’s thoughts about how to solve many (most?) of the problems OOP claims to solve. So, it gives a fair amount of insight into his thinking on at least the general subject.
For anybody who cares about this general subject area, I think this paper is well worth reading.
Summary
Much of what object oriented programming advocates is useful but superficial. It provides useful mechanisms, but its advocates often tend to treat the superficial as goals in themselves. It is not enough to merely separate interface from implementation—we must work much harder at learning (and teaching) the qualities of good interfaces. Syntax is at most a small part of that—to be useful, the semantics of an interface must be well chosen and tightly defined.
I'll address the question of whether Dijkstra really did say this. It does seem doubtful that he WROTE it. Did he SAY it? According to Bob Crawford, he did.
The journal TUG LINES was produced by the Turbo User's Group. Issue 32 (August/September 1989) had the first of a series of articles by Bob Crawford about Object Oriented Programming. I quote:
Last year I asked Edsger W. Dijkstra, winner of every award computer science has to offer (including the Turing award), his opinion of OOP. He replied that "object oriented programs are offered as alternatives to correct ones" and that "object oriented programming is an exceptionally bad idea which could only have originated in California."
Crawford went on to write that OOP originated with Smalltalk, which we all know is false.
Other answers have suggested that Dijkstra would surely have known that OOP originated with Simula67 in Norway and not with Smalltalk in California. Yes, probably. But in Smalltalk everything is an object and I doubt if this is true of Simula67. So I suspect that Dijkstra objected, not to using objects where appropriate (for example when simulating interacting agents), but to forcing programmers to use objects for everything.
As a rule, mathematicians can be strange. There are many mathematicians, but very few of them can be physicists. The reason is that most of the brain is busy with abstract thinking, leaving no room for thinking about the subject matter. As a rule this gives a good ability to solve already-formalized tasks and a poor ability to formalize real-world tasks, that is, to build a model that can be described with mathematics as a language. They do that, but rather badly, and they do not realize it. It is especially funny to read mathematicians who try to be philosophers. There are many ways for them to violate their own abstraction boundaries. The simplest is the god paradox, where a mind tries to win over itself and at the same moment tries to avoid being defeated. So it is no wonder when a mathematician tries to compare “soft” to “hot” and considers that a normal way of thinking, because that person believes they are comparing numeric values…
When I hear or read about a mathematical theory of OOP, I never understand what it is supposed to be about. OOP is a way of describing real-world objects. The moment we have a theory of that, we will have a theory of AI, no more and no less. Separation of interface from implementation is only a small part of OOP, and it can be achieved in pure functional programming as well (“pure” here meaning without OOP, because OOP does not exclude the functional style at all): any function provides an interface through its signature (its parameters) and its return value, while its implementation is hidden from the caller. So the main part of OOP is object-oriented design, and there is no general way to do such design, except perhaps in the very simplest cases. What is so surprising about that? And nobody knows when most mathematicians will learn to distinguish solving an already-formalized task from the formalization process itself.
A special topic is technical requirements written by mathematicians. As a rule they do not like to discuss this. You may hear something like: “That is not so important and it is not worth spending much time on.” And that is because they are not able to do it well. They are excellent solvers of already-formalized tasks, and math is a very beautiful and important thing… but everyone should do their own job.
If you look at how most popular languages implement object-oriented programming, it's pretty easy to see why he might have said that. I present to you, EnterpriseQualityCoding/FizzBuzzEnterpriseEdition, a "Ha Ha Only Serious" take on "enterprise-grade" Java code.
Now, I haven't been able to track the exact source of the quote you mention, but I found some other quotes:
In some ways programs are among the most complicated artefacts mankind ever tried to design, and personally I find it fascinating to see that reasoning about them is so much aided by simple, elegant devices such as predicate calculus and lattice theory. After more than 45 years in the field, I am still convinced that in computing, elegance is not a dispensable luxury but a quality that decides between success and failure; in this connection I gratefully quote from The Concise Oxford Dictionary a definition of "elegant", viz. "ingeniously simple and effective". Amen. (For those who have wondered: I don't think object-oriented programming is a structuring paradigm that meets my standards of elegance.)
...and...
In the case of Computing Science I am referring to the pressures from a society that is structurally unable to include in its vision a view of programming as a branch of formal mathematics and applied logic, and that therefore is forced to ask for snakeoil. If, then, the campus is insufficiently shielded from those pressures and the market forces are given too free a reign, you can draw your conclusion: a flourishing snakeoil business, be it in "programming by example", "object-oriented programming", "natural language programming", "automatic program generation", "expert systems", "specification animation", or "computer-supported co-operative work".
Now, today we have things like mixins, traits, metaclasses, multimethods (in CLOS), even solutions to the Circle-ellipse problem (again CLOS). And OOP is falling back a bit in the "snake oil" race (but just a tiny bit, and only because it has to compete with many more snake oils today than it did in the '80s). But, sadly, even in 2016 the languages that "do OOP right" are very few in number, and with very few exceptions (like Python and Ruby), they are generally unpopular. If so many people get it wrong, then there has to be something wrong with the idea itself (I think UX designers will agree; and, ultimately, high-level programming languages are a UX problem - how to communicate between human and machine).
I want to chime in with those here supporting OOP. In the days of Dijkstra, the academic folks liked to talk about proving programs correct. While a nice idealistic idea, that kind of rigor was only common among those with degrees in mathematics, and in practice in the world of software development, it almost never took place, and if it had been common, the pace of software development would have been vastly slower. OOP was and is the practical solution because it allows dividing programming problems into manageable parts.
While I agree with others that issues like caching and parallelism are still very challenging, OOP does not make them worse. A solution for parallel access to the variables of an object can be embedded inside the class it belongs to and protect other code from the problem. Same with caching.
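A small sketch of what “embedding the solution inside the class” can look like, assuming Python’s standard threading module (a hypothetical class, not from any particular codebase):

```python
import threading

class SafeCache:
    """All locking lives inside the class; callers never see the lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

    def put(self, key, value):
        with self._lock:
            self._data[key] = value
```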
Although Dijkstra did favor functional programming over OOP, I think it is worth mentioning that at the very beginning (EWD132, unpublished, year not determined), he actually liked Tony Hoare’s idea of the record class, which greatly influenced the design of Simula 67. Notice that, contrary to Smalltalk’s message passing, Simula’s (record) class approach is much more similar to modern OOP languages like C++ and Java. I agree that with encapsulation, polymorphism and other bells and whistles, modern OOP is much richer and more perplexing, but still I think it’s fair to say that Dijkstra used to be a big fan of OOP.
I guess people just change, probably because tools and use cases change. Personally I like the elegance of FP far more than OOP. But at the end of the day, if I have to code a robust game or website, I would not exclude C++/Java from my list.
I wonder if he really said that. I find lots of people quoting him, but none giving the source. Wikipedia lists it as "unsourced" (Talk:Edsger W. Dijkstra).
Searching through the archive of his notes I find one note calling OOP "snake oil", among a number of other things that he disapproves of ("programming by example", "natural language programming").
Really the only relevant quote is (Achievements and Challenges (EWD1284) ):
In some ways programs are among the most complicated artefacts mankind ever tried to design, and personally I find it fascinating to see that reasoning about them is so much aided by simple, elegant devices such as predicate calculus and lattice theory. After more than 45 years in the field, I am still convinced that in computing, elegance is not a dispensable luxury but a quality that decides between success and failure; in this connection I gratefully quote from The Concise Oxford Dictionary a definition of "elegant", viz. "ingeniously simple and effective". Amen. (For those who have wondered: I don't think object-oriented programming is a structuring paradigm that meets my standards of elegance.)
I am a huge supporter of Object Oriented programming as it has substantial advantages over other programming styles.
While Dijkstra's contribution to Computer Science is enormous, he was also famous for saying things that are a bit difficult to digest.
Dijkstra was the one who first coined the phrase structured programming.
Though there are so many advantages of Object oriented programming, we must admit there are some pitfalls too.
Structured programming beats Object oriented programming in terms of speed, size and effort.
Since the statement was made by Dijkstra a long time ago, I believe these were significant factors at the time.
Considering his contribution to Computer Science, I think we must respect his statement no matter what we think about object oriented programming.
Regardless of Dijkstra’s genius, whether he actually said that, and whether or not he was really prejudiced against Californians, there is nothing inherent in OO programming that is “exceptionally bad”.
It is a useful tool to have in the toolbox. But there are two problems that it has led to in the real world:
- The widespread, semi-religious view that OO is the one and only way to do any programming whatsoever. Even this would not be so bad if those of us that disagree were not then vilified and made out to be incompetent old fogeys, stuck in the past.
- It has been implemented diabolically badly in the languages that have come to dominate present-day programming [Java, C++].
I haven't seen a better explanation of OOP to date than the one given by a guy who never had any formal engineering training, but always had a clear idea about everything he did and preached, be it technology, design or art.
Here, in an excerpt from a 1994 Rolling Stone interview, Jobs explains what object-oriented programming is.
Jeff Goodell: Would you explain, in simple terms, exactly what object-oriented software is?
Steve Jobs: Objects are like people. They’re living, breathing things that have knowledge inside them about how to do things and have memory inside them so they can remember things. And rather than interacting with them at a very low level, you interact with them at a very high level of abstraction, like we’re doing right here.
Here’s an example: If I’m your laundry object, you can give me your dirty clothes and send me a message that says, “Can you get my clothes laundered, please.” I happen to know where the best laundry place in San Francisco is. And I speak English, and I have dollars in my pockets. So I go out and hail a taxicab and tell the driver to take me to this place in San Francisco. I go get your clothes laundered, I jump back in the cab, I get back here. I give you your clean clothes and say, “Here are your clean clothes.”
You have no idea how I did that. You have no knowledge of the laundry place. Maybe you speak French, and you can’t even hail a taxi. You can’t pay for one, you don’t have dollars in your pocket. Yet I knew how to do all of that. And you didn’t have to know any of it. All that complexity was hidden inside of me, and we were able to interact at a very high level of abstraction. That’s what objects are. They encapsulate complexity, and the interfaces to that complexity are high level.
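Translated into a hypothetical code sketch, the laundry analogy is just encapsulation plus one high-level message:

```python
class LaundryService:
    """Knows where the laundry place is and how to pay; callers do not."""

    def __init__(self):
        self._best_place = "the best laundry in San Francisco"   # hidden knowledge
        self._dollars = 20                                        # hidden state

    def launder(self, dirty_clothes):
        # Hailing the taxi, paying, navigating: all hidden inside this method.
        return [item.replace("dirty", "clean") for item in dirty_clothes]

service = LaundryService()
print(service.launder(["dirty shirt", "dirty socks"]))   # one high-level request
```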
He didn't. He mentioned OOP only two or three times. He was not particularly fond of it, but he never wrote anything like that. The worst thing he wrote, specifically about OOP, was that it was not 'elegant'. Which is very far from meaning that it's an exceptionally bad idea.
OOP is nothing more than syntactic sugar for imperative languages, in contrast with functional programming, which is really based on mathematics (abstract algebra, set theory, and category theory).
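The “syntactic sugar” view is easiest to see in a language that exposes the receiver explicitly; in Python, for instance, a method call is literally an ordinary function call with the object passed as the first argument:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def shifted(self, dx, dy):
        return Point(self.x + dx, self.y + dy)

p = Point(1, 2)

q1 = p.shifted(3, 4)           # the object-oriented spelling
q2 = Point.shifted(p, 3, 4)    # the same call, desugared to a plain function
assert (q1.x, q1.y) == (q2.x, q2.y) == (4, 6)
```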
Did he really say that? Ideas for object-oriented languages started with Simula 67 (Simula - Wikipedia), which was developed in Oslo, Norway. Pretty far from California.
Alan Kay, who coined the term “object oriented”, was influenced by Simula (among other things) when he developed Smalltalk with his team at Xerox PARC.
I’ll start by saying that I don’t clearly understand the English as simply stated.
However, three things might help us (a) English was not Edsger’s native language (b) he was very proud of his skills in English (c) he delighted in coming up with snide remarks (one of his good friends was Bob Barton, who also liked to do this).
I read this as “programmers will never approach professional status” — and — “the programming occupation will never become extinct (too bad)”!
He desired the process of getting computers to “do what we mean!” to be rather like the mathematics he was used to when getting his PhD in mathematical physics. In his own words:
After having programmed for some three years, I had a discussion with A. van Wijngaarden, who was then my boss at the Mathematical Center in Amsterdam, a discussion for which I shall remain grateful to him as long as I live. The point was that I was supposed to study theoretical physics at the University of Leiden simultaneously, and as I found the two activities harder and harder to combine, I had to make up my mind, either to stop programming and become a real, respectable theoretical physicist, or to carry my study of physics to a formal completion only, with a minimum of effort, and to become....., yes what? A programmer? But was that a respectable profession? For after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline? I remember quite vividly how I envied my hardware colleagues, who, when asked about their professional competence, could at least point out that they knew everything about vacuum tubes, amplifiers and the rest, whereas I felt that, when faced with that question, I would stand empty-handed.
He had “bitten the apple” and gotten hooked. And the dual attractions and repulsions of math on the one side and programming on the other stayed with him his whole life.
It was his pristineness towards the former that got him to disparage the latter — even though that was the path he also had taken …
I thought Simula was invented in Norway. But then I just checked OO history on Wikipedia, and it seems it was more MIT in the late ’50s.
Would he have been the kind of guy to check his facts?
But, he is from Holland?
From such a comment, I would have thought it was someone from New York.
But it is more likely that an MIT professor would say this. In fact, it is more than highly likely.
Dijkstra wasn’t visiting MIT at the time he said this, or was he?
I imagine if he said this at MIT that his whole audience would have agreed with him.
Developers hating on stuff is almost universally a useless observation. Any programming concept, paradigm, language and what have you has those “haters”, who tend to be vocal but rarely reveal anything significant. Many developers hate pointers, many developers hate Python or JavaScript or Java or whatever, many developers hate hard tabs, many developers hate this or that or the other thing.
When you step away from the outpouring of emotion and see how software is actually developed around the globe, in industry and for research, in applications large and small, in embedded systems and server farms, with emphasis on usability or security or speed or readability or some combination thereof, by individuals for their own use and by thousands in collaboration – you find that various paradigms are used for various purposes, with more or less success and ease and fun, and the “hate” is largely a waste of time.
Of course, over years and decades, things shift about. Fashions come and go, languages come and go. Programming is a human endeavor, and this is how human endeavors work. Programming nowadays is practiced by many more people than, say, metalworking, so there’s more variety of opinion and more audible bitching among programmers than among blacksmiths. Many years ago it was the other way around.
Object-Oriented Programming is a fundamental concept of modern software development. In different contexts it may be highly suitable, optional, or irrelevant. That’s all.
In the beginning there was Bob. Bob wrote code in a “Stream of Consciousness” way (SOC). He would dump out code as he was thinking it and it was good… most of the time. There would be a bug and he would fix it.
Bob could handle 2 thousand line programs (2k LOC) with ease. With enough debugging time he could handle 4kLOC. But management saw the need for more complex apps.
Now there were two programmers. The problem was that Bob wrote SOC code, from Bob’s consciousness. The other programmer couldn’t figure out what the hell was going on.
Bob would write 1000 LOC, call a function which would eventually become 1000 lines as well. And so on. Pretty soon the developers were hitting 10KLOC but they couldn’t add another feature without a rewrite. Management was pissed.
This happened all over the country. It limited the size and complexity of applications running on computers. The hardware guys in the meantime doubled the ability of the computers every couple of years or so. Management was pissed.
Until someone said “Hey let’s break down this massive blob of code into a bunch of small, well defined functions.” Functional Decomposition was born. Func-D (cool name huh?) became the norm. When it was followed, the functions were small (single-purpose?), they were well-defined (had clear prerequisites and clear postconditions) and had a clear name that was easy to figure out.
Developers went nuts. With Funky-D applications grew an order of magnitude. 100KLOC programs were in reach, team sizes grew and the complexity of the applications grew as well.
And there’s Bob. Bob wrote tons of smaller functions sure. But when we needed a new function, he just wrote it. There were a 1000 other functions so he didn’t check if the one he just wrote already existed. Bob caused weird bugs, e.g. a code fix in one function caused the bug to go away in some parts of the application but not all parts.
Until someone said “Hey let’s organize these functions into groups..” Modules were born. The idea is that you place a function within a module (aka namespace). Put functions that “belong together” into the same module. It organizes them and puts a nice name around it.
Some languages even implemented scope. Some functions could be called from outside the Module (public) while others could only be called from within it (private).
Developers went crazy. Complexity and the size of applications went up again. The damn hardware guys kept pushing the edge… and you know it, management was still pissed.
Bob worked on that. His programs had four modules: Main, Feature1, Feature2 and Generic. Main was huge. Feature1 and Feature2 were smaller, but they had tens of exactly the same functions that Bob cut-and-pasted into multiple modules. And then the dumping ground was Generic, which was humongous. It had tons of functions again.
Someone said, “Let’s make a Module able to inherit from another Module” and some other language capabilities and thus OO was born.
Love it! Complexity and size grew again.
But Bob now had 4 classes Main, Feature1, Feature2 and Generic. Feature1 and Feature2 inherited from Generic. So now Feature1 and Feature2 were cleaner, but Generic was a dumping ground filled with crap.
Then someone came along and said “OO sucks! It’s the worst”.
Thanks Bob.
The problem with OO is that it is exactly the opposite of failure: it was immensely successful (out of all proportion to the actual benefits it provides).
In the dark days of OO's height of success it was treated almost like a religion by both language designers and users.
People thought that the combination of inclusion polymorphism, inheritance, and data hiding was such a magical thing that it would solve fundamental problems in the "software crisis".
Crazier, people thought that these features would finally give us true reuse that would allow us to write everything only once, and then we'd never have to touch that code again. Today we know that "reuse" looks more like github than creating a subclass :)
There are many signs of that religion being in decline, but we're far from it being over: many schools and textbooks still teach it as the natural way to go, a generation of programmers learned to program this way, and, more importantly, a huge amount of code and many languages out there follow its style.
Let me try and make this post more exciting and say something controversial: I feel that the religious adherence to OO is one of the most harmful things that has ever happened to computer science.
It is responsible for two huge problems (which are even worse when used in combination): over engineering and what I'll call "state oriented programming".
1) Over Engineering
What makes OO a tool that so easily leads to over engineering?
It is exactly those magical features mentioned above that are responsible, in particular the desire to write code once and then never touch it again. OO gives us an endless source of possible abstractions that we can add to existing code, for example:
- Wrapping: an interface is never perfect for every use, and providing a better interface is an enticing way to make code better. For example, a lot of classes out there are nothing more than a wrapper around the language's list/vector type, but are called a "Manager", "System", "Factory" etc. They duplicate most functionality (add/remove) while hiding others, making it specific to what type of objects are being managed. This seems good because it simplifies the interface (see the sketch just after this list).
- De-Hard-Coding: to enable the "write once" mentality, a class better be ready for every future use, meaning anything in both its interface and implementation that a future user might want to do differently should be accommodated for, by pulling things out into additional classes, interfaces, callbacks, factories.
- Objectifying: every single piece of data that can be touched by code must become an object. Can't have naked numbers or strings. Besides, naming these new classes creates meaning which seems like it makes them easier to deal with.
- Hiding & Modularizing: There is an inherent complexity in the dependency graph of each program in terms of its functionality. Ideally, modularizing code is a clustering algorithm over this graph, where the most sparse connections between clusters become module boundaries. In practice, the module boundaries are often in the wrong spot, produce additional dependencies themselves, but worst of all: they become less ideal over time as dependencies change. And since interfaces are even harder to change than implementation, they just stay put and deteriorate.
You can iteratively apply the above operations, and in most cases thus produce code of arbitrary complexity. Worse, because all code appears to be doing something and has a clear name and function, this extra complexity is often invisible. And programmers love creating it because it feels good to create what looks like the perfect abstraction for something, and to "clean up" whatever ugly interfaces it sits on top of some other programmer made.
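As a minimal, hypothetical sketch of the "Wrapping" pattern described above (the class and names below are invented for illustration, not taken from any real code base), here is a Python "Manager" that is essentially a renamed list:

    # Hypothetical example: a "Manager" that wraps a built-in list,
    # re-exposing add/remove under new names while hiding everything else.
    class EnemyManager:
        def __init__(self):
            self._enemies = []              # the wrapped list

        def register(self, enemy):          # duplicates list.append
            self._enemies.append(enemy)

        def unregister(self, enemy):        # duplicates list.remove
            self._enemies.remove(enemy)

        def count(self):                    # duplicates len()
            return len(self._enemies)

    # Compared with just using a list, the "simpler" interface mostly
    # restates functionality the language already provides:
    manager = EnemyManager()
    manager.register("goblin")
    manager.unregister("goblin")
    assert manager.count() == 0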
Underneath all of this lies the fallacy of thinking that you can predict the future needs of your code, a promise that was popularized by OO, and has yet to die out.
Alternative ways of dealing with "the future", such as YAGNI, OAOO, and "Do the simplest thing that could possibly work" are simply not as attractive to programmers, since constant refactoring is hard, much like perfect clustering over time (for abstractions and modules) is hard. These are things that computers do well, but humans do not, since they are very "brute force" in their nature: they require "processing" the entire code base for maximum effectiveness.
Another fallacy this produces is that, when over engineering inevitably causes problems (because, future), those problems were caused by bad design up front, and next time we're going to design even better (rather than less, or at least not for things you don't know yet).
2) "State Oriented Programming"
There's a famous quote: "There are only two hard things in Computer Science: cache invalidation and naming things". This is about the first of those two.
First a thought exercise: take a complex program that reads a file, like a compiler, or maybe a word processor. By definition (before any further user input), all objects in that program's memory are derived from the contents of its input file. Those objects exist at all because, when the program needs some piece of information (like the compiler asking "what is the type of variable a?"), it needs, for speed reasons, to have cached that information. In theory, we could write functions such that every time any information was needed, they would re-scan the input file, and never store any objects at all. Slow, but possible, even for interactive software.
As programmers, we choose how much to cache, all the way from nothing to redundant amounts.
The problem with caching is the invalidation part: as data is derived from other data (which in turn is derived from other data, etc.), if the original data changes, so should all the derived data. Almost no programming languages do this for you automatically, meaning the programmer is in charge of writing code that detects when data has changed, knows how to recompute it, and makes sure all parts of the program "hear" about it, which is very error prone. Programs that would re-scan the source data every time are less buggy (they never get into an inconsistent state), and are easier for a programmer to change (dependencies are more explicit).
Ever wonder why so many problems with computers can be fixed simply by restarting/rebooting/re-running/re-generating/re-compiling? Welcome to cache invalidation.
This is a problem with almost every language, but OO exacerbates it by promoting copies of data (because, data hiding) and generally making it harder to track and update these data dependencies (because, many layers). Beyond that, thinking of your code as defined by "objects" makes you more likely to want to store everything, even in cases where data is transient and could be defined much better as a sequence of transformations. You cannot add a field to an object just for a short amount of time: once you add it, you are responsible for keeping it valid for the lifetime of the object.
Is this some thinly-veiled way of trying to get everyone to rewrite their code base in Haskell or another pure functional language? No. First, Haskell doesn't even solve all these problems. But being aware that every time you "cache" something it is a choice, and that there may be an alternative way of structuring things that makes the cached data more transient or simply recomputed, is one of the most important ways to reduce complexity in software. In most any language, you can structure code to be "as functional as possible", within practical limits.
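As a small, hypothetical illustration of that trade-off (class names invented for the example): the first version below caches a derived value and must remember to invalidate it; the second recomputes it from the source data on demand, so it can never go stale.

    # Version 1: cache the derived value; the programmer must remember to
    # update it whenever the source data changes.
    class ReportCached:
        def __init__(self, numbers):
            self.numbers = list(numbers)
            self.total = sum(self.numbers)   # cached; can go stale

        def add(self, n):
            self.numbers.append(n)
            self.total += n                  # forget this line and the cache is wrong

    # Version 2: recompute on demand; slower in principle, but it cannot
    # get into an inconsistent state.
    class ReportRecomputed:
        def __init__(self, numbers):
            self.numbers = list(numbers)

        def add(self, n):
            self.numbers.append(n)

        @property
        def total(self):
            return sum(self.numbers)         # always derived, never stale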
I’ve not seen many actual developers do this. Not actual people who develop software for a living.
There has been a resurgence of the idea that it is not the only paradigm out there.
Functional programming, first seen in the 1950s, has had an uplift in languages that enable its use. Or at least parts of it.
This approach swaps out some trade-offs in OOP related to things like concurrency and program structure.
I think this has created a broader interest in how we design programs and what techniques are available.
That’s a good thing.
Actual hate on OOP - or other techniques - is a bit daft. You run the risk of missing a valuable tool for the tool belt.
It’s hard to praise too highly the programming languages that are the bridge from one way of looking at programming to much better ways of looking at programming. The two greatest such in the 60s were Lisp and Simula. Perhaps the greatest single conception of a software system of the 60s was Sketchpad.
As explained elsewhere on Quora, and in “The Early History of Smalltalk”, I had chance encounters with Sketchpad and Simula in my first week of grad school in late 66, that shocked me into a realization about “computers as basic and universal units” via the connections and parallels with other like things, such as biological structures, computers on networks, processes in time-sharing systems, general systems of parts intercommunicating, and so forth. I started to think about dynamic languages to make such processes, and how the processes could be made efficient and parsimonious enough to be universal.
Someone asked me what I was doing, and without thinking, I said “object-oriented programming”. (A very bad choice as it turned out, for many reasons.)
In the first years of the 70s at Xerox Parc with the great help of a terrific research group, and especially Dan Ingalls, we were gradually able to solve the software engineering problems to make this “systems approach” to programming practical (and especially on the emulation architecture hardware of Chuck Thacker). The starting place was how Lisp systems were built and made efficient.
This led to real power of expression, and a number of breakthroughs at every level, from the user interface to the metal.
This attracted great attention, and 10 years later in the 80s “object-oriented languages” started to appear. The only ones that had some of the same flavor as Smalltalk were several versions of Lisp. There were several “object Pascals”, even an “Object COBOL”!, and of course, C++.
It’s important to realize that C++ was part of a chain of ideas starting around 1979 by Bjarne Stroustrup who was also shocked into action from encountering Simula. He was not trying to “steal from Smalltalk” in any way. Here’s what he says in his history paper:
C++ was designed to provide Simula’s facilities for program organization together with C’s efficiency and flexibility for systems programming. … The goal was modest in that it did not involve innovation, and preposterous in both its time scale and its Draconian demands on efficiency and flexibility.
Elsewhere, he takes pains to say that he’s “not trying to do what Smalltalk at Xerox Parc did”. He was essentially trying to do with C what Simula did with Algol.
His approach was via “Abstract Data Types” (which the co-inventor of Simula — Dahl — also liked), and this is the way “classes” in C++ are generally used. And, similar to Simula being a preprocessor to Algol, C++ was a preprocessor to C: the “classes” were program code structuring conventions but didn’t show up as objects during runtime.
And for a variety of reasons, some of them not good, some reasonable, C++ got very popular.
It became part of the “colonization” of the term “object-oriented”. By the end of the 80s we could not explain the software at Parc as being “object-oriented” because the term had become co-opted. This kind of thing happens a lot in many areas. So it was understandable, and Bjarne certainly wasn’t to blame. The term became a kind of fad.
But it was — and is — quite annoying. I had to start calling the Parc stuff “real objects”, etc.
The push-back on complaints was the same one for pop changes in language usage and meaning i.e. “the people determine what a word or term means”.
But the -ideas- of OOP that worked so well for us at Parc came from many places, and we liked Goethe’s quote: “We should all share in the excitement of discovery without vain attempts to claim priority”. We did what we did, and we called it what we called it.
Hence the quote, which is not so much about C++ per se but about the term that we had been using to label a particular approach to program language and systems design.
—— Note: Quora tends to bury comments, so I’m copying an interesting one here along with my reply:
Abhinav Sharma: Thanks! I’m curious if you considered rebranding “real OOP” once the term had gotten colonized? I realize it must feel sad to have the term you coined get misconstrued but do you think a rebranding would be a good way for the ideas themselves to live on?
AK: I think the ideas are living on for historical purposes for those who are interested. And something can be learned about human nature by comparing implemented artifacts from the past with what got accepted and why.
On the other hand, the 70s idea of “real OOP” was hugely powerful back then, but what was implemented was far from a complete set of ideas, especially with regard to scaling, networking, etc. Dave Reed’s MIT thesis happened in 1978 and this dovetailed with ideas I thought important also, but funding and other circumstances delayed implementing his version of “real objects” until the early 2000s (this was Croquet).
How dynamic objects intertwined with ontologies and inference was explored by Goldstein and Bobrow at Parc. Their 4 papers on PIE and their implementation were the best extensions ever done to Smalltalk, and two of the ideas transcended the Smalltalk structure and deserved to be the start of a new language, and perhaps have a new term coined for it.
One way to look at this is that computers have enormous degrees of freedom and capacity, and this makes it very difficult to come up with great tools to program them. This is a mismatch with how humans learn and particularly with how we learn and do skill based activities. The latter tend to be very conservative, whereas the demand of computing is that we have to shed our skins every few years (and we don’t).
I don’t think that “real OOP” as we thought of it then, is the way to go in the future (and didn’t then). Consider Sketchpad, and that it is programmed in terms of constraints that the system solves to guarantee a very close fit between what was desired and what is manifested. This is an early glimpse into “requirements-based programming”. It has something like objects — hard to get away from the main idea — but is “relational” rather than message-based (the messages are implicit, etc.) Sketchpad was a tour de force in many ways, including its solvers. But they didn’t go far enough to handle general requirements. Today I think this is doable via a half dozen new techniques plus enormously larger machine capacities.
Stuff like this is what we should be working on!
No, by most measures it is a success. How would you determine whether a programming paradigm is successful? Obviously there are several ways:
1. Has it improved the maintainability of large software projects? Yes. I have the pleasure of seeing, on a daily basis, a very large code base 90% of which is in C. Had it been written in object-oriented C++, it would be substantially more maintainable than it is. The principal problem with imperative, non-object-oriented programs is the proliferation of uncontrolled state - once the code base is large enough that no single person understands it, there are no guarantees that variables your code references will not change whenever you call any function. Object-oriented style forces the containment of state in a way that makes it predictable when it will change.
2. Has it made programming easier? Yes. Beginners find it much easier to think about collections of objects evolving over time, since it allows us to apply intuition from the physical world that is usually correct. For experts, self-contained classes are much easier to reason about than uncontrolled gobs of state.
3. Is it widely used? Yes. It's very unusual to hear of a new project being started in a non-OO language, and it's equally unusual for a new language to appear with no OO features. Go and Clojure are not normally thought of as object-oriented languages, but both have significant object-oriented features.
4. Has it encouraged software reuse? Yes, but not as much as it was claimed that it would. Writing code to an interface does encourage reuse within projects and for very general components such as collections libraries and GUI widgets. However, because each interface is essentially its own little private language, it's impossible to exploit common patterns to reduce code in ways the original designer did not intend. In functional style, we can pass anonymous functions around to make use of data in unexpected ways. In object-oriented style, we pass objects around and call their interfaces, but this is only possible if everyone already knows what the interface looks like (see the sketch at the end of this answer).
5. Is it more expressive? No, not usually. Purely object-oriented programs tend to be longer than their non-OO imperative equivalents, and much longer than their functional equivalents: because your state is broken up into little objects you need many little functions, but you do not have the generally reusable abstractions that make functional programs so short. However, in some cases, because OO interface polymorphism allows dispatch of functions by type, it can lead to more expressive code. This is especially true for graphics and for simulation, where families of objects with different responses to common interfaces are a very natural way to work.
So that's about 3.7 out of a possible 5 points, which isn't bad. Critics of object-oriented programming are usually referring to one of the last two points, where OO is only marginally successful, but missing the important contributions it makes to maintainability, ease of use and the proliferation of better programming languages. It is definitely true that OO - especially Java and the OO dialect of C++ - has encouraged verbosity and discouraged the use of powerful, general abstractions in favour of very specific ones, but it is also true that the new generation of languages that has followed are still for the most part object-oriented languages with more powerful features and fewer restrictions, not a return to Common Lisp, C and BASIC.
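A minimal sketch of the contrast drawn in point 4 (the names and functions below are invented for illustration): in the object-oriented version the caller must know the agreed interface up front, while the functional version accepts any anonymous function of the right shape, including criteria the original designer never anticipated.

    from abc import ABC, abstractmethod

    # Object-oriented style: behaviour can only be passed in through an
    # interface that everyone has agreed on in advance.
    class Scorer(ABC):
        @abstractmethod
        def score(self, item): ...

    class LengthScorer(Scorer):
        def score(self, item):
            return len(item)

    def best_oo(items, scorer):
        return max(items, key=scorer.score)

    # Functional style: any anonymous function of the right shape will do.
    def best_fn(items, score):
        return max(items, key=score)

    words = ["ox", "zebra", "cat"]
    print(best_oo(words, LengthScorer()))       # 'zebra'
    print(best_fn(words, lambda w: -len(w)))    # 'ox' - a criterion nobody wrote an interface for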
Dijkstra is a pretty smart guy, who has seen a lot. It is worth understanding what he thinks. I cannot claim any special knowledge of Dijkstra’s intent, however he has previously not been a huge fan of human ability.
I think he is saying that while software is eating the world, AI is eating the software employment world. The number of thoroughly professional software developers is not growing very fast, while a tsunami of semi-educated developers assisted by AI is attacking the job market like a zombie horde.
He may also be feeling that employers do not really desire professional developers because they expect too many things, like a salary, and to be treated with respect. Employers would prefer something more easily manipulated, a blue-collar kind of developer willing to work long hours for sociopathic managers on death-march projects at low wages. Or that could just be me :-).
I’ll take a different view.
Dijkstra has a specific goal in mind when discussing professional status. Specifically, to him, a professional is trained extensively to solve problems methodically. If you need a bridge from point A to point B, a professional civil engineer knows the complete set of design alternatives that the profession has worked out. He will evaluate each of those alternatives and rationally choose the best one for the situation. Another professional engineer, when confronted with the same problem, would come up with the same solution. No creativity. No art. No fuss. No bother. Just pure methodical workmanship.
Programming today is far from that. Give a problem to a dozen programmers, and you will not get back even two similar designs. They will be wholly unique.
Dijkstra wanted programming to be more than craftsmanship. He wanted it to be methodical. A discipline. He even wrote a book on it.
So what Dijkstra is lamenting is that programming is much more likely to become obsolete due to being taken over by AI than it becoming a true rigorous discipline as he envisioned it.
And I am forced to agree.
Computer scientists did not come up with object-oriented programming before procedural programming because humans don't normally start with the most sophisticated ideas first. Humans start with simple ideas and then work their way up to sophisticated ideas.
THE DEVELOPMENT OF HIGH-LEVEL PROGRAMMING LANGUAGES
Digital computers work by executing a long list of instructions one instruction at a time, but they can execute on the order of 1,000,000,000 instructions per second.
Computer programmers write algorithms as long lists of instructions, initially using only 4 types of instructions, arranged in various ways to accomplish something such as a word processor.
#1 A typical modern digital computer has only 4 types of instructions:
1) It can move a number from one place to another place: memory locations and input/output.
2) It can perform simple computations such as adding and multiplying numbers.
3) It can do conditional jumps, and
4) It can execute subroutine calls and returns.
#2 Assembly language took the basics of digital computers and their instruction sets and put them into a mnemonic language that made it easier for programmers to code computer programs. However, assembly languages were unique to each digital computer.
#3 High-level language took assembly languages and created 4 abstractions that made programming even easier and allowed the same program to run on any digital computer that had a compiler created for it.
1) Variables and more complex data structures are a useful abstraction for the numbers stored in memory locations. These are the data in data processing.
2) Computational statements use variables to add, multiply, and perform other computations, and then store the result in another variable. These are called assignment statements, which do all the computation in computer programs.
3) Flow control logic uses the conditional jump instruction to implement loops, if-then, do-while, and switch-case statements using the variables. These allow programmers to create the flow control logic of computer programs, and finally
4) Functions (subroutines) use the call – return instructions and the stack and take input variables and produce output variables and perform other useful work. Functions are the worker-bees of computer programs.
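A minimal, hypothetical sketch in Python showing all four of those abstractions together: variables, assignment statements, flow-control logic, and a function.

    # 1) variables / data structures
    prices = [3, 7, 2, 8]

    # 4) a function (subroutine) with inputs and an output
    def total_above(values, threshold):
        total = 0                    # 2) assignment / computation
        for v in values:             # 3) flow-control logic (loop + condition)
            if v > threshold:
                total = total + v
        return total

    print(total_above(prices, 4))    # prints 15 (7 + 8)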
WHAT WAS WRONG WITH THE OLD HIGH-LEVEL LANGUAGES?
The old high-level languages had a fixed number of types of variables, although one could create an infinite number of types of functions. They also had no good way of organizing software, no good method of storing and re-using functions.
OBJECTS
To fix those 2 problems, the object (aka, class) was invented. Objects replace modules and are used by modern programmers as their universal organizing mechanism. Objects can also be used to define any type of data structure/variable which work as though they were native to the programming language. Programming with a high-level language that contains the object mechanism is called object-oriented programming.
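As a hedged illustration of that last point (the class below is invented for the example), a user-defined object type can be made to behave much as if it were native to the language:

    # A user-defined type that behaves much like a built-in numeric type.
    class Money:
        def __init__(self, cents):
            self.cents = cents

        def __add__(self, other):        # lets us write a + b
            return Money(self.cents + other.cents)

        def __eq__(self, other):         # lets us write a == b
            return self.cents == other.cents

        def __repr__(self):
            return f"Money({self.cents})"

    assert Money(150) + Money(50) == Money(200)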
RUNTIME ENVIRONMENT
The next great improvement in modern programming was the invention of the runtime environment, which manages all the objects in the computer’s library of objects and lies between the program and the operating system.
CONCLUSION
The above was a highly simplified version of the development of modern high-level languages, leaving out the development of the modern operating system, which is essential in modern programming. I enjoyed developing a real-time multi-tasking operating system in the mid-1970s.
I also left out the development of functional programming for simplification purposes. You may have noticed that I also left out all mention of the programming paradigms. I did that because those paradigms are a viewpoint created after the development of modern programming. I find them to be nonsense, a figment of somebody’s imagination. But if other programmers find paradigms inspirational and useful, that is good. They don’t change the reality of modern programming because modern programming is what it is.
I hope my short essay helps you understand how modern computer programming evolved—which I personally witnessed decade after decade.
I’ve seen this. It falls under one of my favorite topics to write about, jerk programmers.
Most of the hate comes from Haskellers. Not from FP folks, but from Haskellers specifically. They insist that Haskell is wonderful (it is) and that it’s based on easy-to-grasp principles (it’s not) and that it solves every problem well (it doesn’t).
There are many poorly-designed, over- or under-engineered OO systems. This is partly because OO allows cruft to accumulate, but it’s more that OO is the dominant paradigm and has been for nearly 30 years.
OO is programming by metaphor. People get introduced to metaphor in middle school. FP is programming by abstract algebra. Most people, even most computer scientists (and certainly most developers) are never introduced to abstract algebra. That’s fine. The amount of advanced algebraic topics you must understand to fluently use Haskell is not large, but most people have none of the necessary background.
All software is “done” when time or budget (or both) run out. That’s the nature of professional development. Get used to bad code. If you can put together a long-lived, tightknit team of likeminded developers who are experts in your preferred tools/tech stack, you’ll make something you like. However, that’s not the reality of development.
A pretty long time ago I used to hate OOP because it’s typically taught mixed with Object-Oriented Design. They need to be considered separately.
OOP is a way to control complexity. It lets you write larger programs more easily, faster and more reliably. The major reason for this is that it makes you think more carefully about the interfaces and limits the number of cross-dependencies. This carries some overhead, so the problem needs to be large enough to get the net benefit from it. That’s why the typical textbooks on OOP look so dumb: they try to show the use of OOP on small examples that would be much better off without it. You can do OOP in the non-OOP languages too, but you would get even more overhead; the OOP languages condense a good deal of the overhead into concise statements.
OOD basically says: now that we have OOP to control complexity, you don’t need to think about the design! Just make everything an object and connect them in every which way, and you’ll get a working program! In reality this doesn’t work. The books on OOD demonstrate their point with the small and extremely ugly examples that kind-of sort-of work. But once the scale of the problems goes up, the OOD generates complexity much faster than OOP can handle it. The “big pile of crap” non-design just doesn’t work.
So, the summary: OOP good, OOD bad, and you need to be able to tell them apart.

The problems with OOP are:
- It is incredibly distracting. It is very easy to end up in a situation where you’ll be doing “illusory work”. For example, you can sink a massive amount of time into trying to come up with the “perfect” (meaning “beautiful”) object hierarchy. While you’re busily splitting/merging classes, the functionality of your project does not improve; you’re essentially wasting time while being fully convinced that you’re being useful.
- There are people who treat it like religion. I’ve seen an article (discussing C# extension methods) saying, basically “I’ve found a way to write clean, maintainable and easy to read code, but it wasn’t OOP, so I concluded that it is evil and wrong”. This is just ridiculous.
With all that in mind…
OOP is a useful tool, when applied in the right place. However, you should not attempt to apply OOP to every single problem; it is entirely possible that an easier-to-read non-OOP solution exists for the problem you’re dealing with. This is why there are programmers who recommend you start with C - so you get acquainted with a simpler approach first.
In the end, if you ever want to rely on some principle, I’d recommend to stick with KISS and YAGNI.
Basically.
- Functionality should only be implemented when it is necessary. (YAGNI)
- If it is not necessary, it should probably be scrapped. (YAGNI)
- Features you do implement should be implemented in the simplest, most straightforward way (KISS).
- With all that in mind, you should aim to keep the code easy to read, easy to maintain and free of bugs.
The logic behind that is that keeping things simple (KISS) and removing/not implementing unnecessary functionality (YAGNI) will reduce size/complexity of the project, thus reducing both maintenance cost and chance of getting new bugs.
It is a level of abstraction that separates a programmer from precise control of the machine code that runs on the CPU.
Often precise control is not needed or wanted. Any computer language beyond machine code contains a level of abstraction, even assembly code.
Machine code is needed when developing a compiler or device driver but beyond that it is usually not used by modern programmers.
Saying that object oriented progra...
There is no direct path from “Boolean Logic” hardware to the deep software of “Smalltalk.”
The concept of an “electric calculation” came from the writings of George Boole, called “Boolean Logic.”
From boolean logic came “adder circuits.” With a sequencer using a plug board or paper tape, the adder circuits could do complex calculations. Examples were the Harvard Mark I, a relay computer with paper tape, and the faster ENIAC, a vacuum tube calculator with plug boards.
The earliest computers did not have “message passing” or even subroutines.
From Turing’s papers from 1936, we get the “Universal Turing Machine,” which leads to the first “stored program computer.”
The Cambridge EDSAC is considered the first “stored program” computer; it ran its first program on May 6, 1949. It had 512 words of 18 bits and 14 instructions. It didn’t even have a procedure call instruction.
No objects or message passing here or even a subroutine.
The first commercial computer, Univac I, was installed in 1951. It had 1,000 words of 12 decimal digits (BCD) and a sign bit.
Let’s skip a decade.
SIMULA is considered the first programming language with “objects”.
It appeared in 1962 and ran on a Univac 1107. It had 64K (65,536) 36-bit words. It did have a subroutine instruction called “Store location and Jump”. It put the return address at the destination address and then started executing at the next location.
Skip to 1968.
Alan Kay envisions “Dynabook”, “A personal computer for children of all ages.” From his experience with the B5500, a tagged architecture, stack based computer, he started building “Smalltalk”.
There is no direct path from “Boolean Logic” hardware to the deep software of “Smalltalk.”
Since the question has been asked, it’s worth reading the detailed history I was asked to write by the ACM in 1992, that became one of the sections of the 2nd History Of Programming Languages conference. The Early History Of Smalltalk
In brief for here, I saw parts of the idea in various forms starting in the early 60s, and thought it useful, but stayed asleep until in 1966 I saw Ivan Sutherland’s Sketchpad system (which completely changed the ways I looked at computing), and within a week saw and learned the first Simula, which was less grand than Sketchpad, but showed how ordinary programming could be changed to take advantage of instantiations of processes.
This double whammy combination “rotated” me to see things from very different perspectives.
A key part of the “rotation” was that
(1) at that time multi-processing and time-sharing systems were using hardware modified to isolate separate processes in the form of “virtual versions of the hardware”
(2) ARPA was in the process of talking about doing the ARPAnet, that would allow many computers to intercommunicate
(3) my two main concentrations in college had been pure math and molecular biology
The form of the “rotation” was ridiculously simple. It was the simple realization that a computer could compute what any computer could compute, and thus you could represent anything computable at any scale using only intercommunicating computers (most would be virtual) as building blocks.
This was completely impractical (which I think was one of the reasons I didn’t think of it earlier). The molecular biology and the ARPAnet really helped, because it was known in the mid-60s roughly that each cell in our body contained billions of informationally interacting components, and we had 10 to 100 trillion cells in each of us. That kind of scaling actually worked, and was far beyond what computing could do.
I think that seeing Sketchpad shocked me into being able to use “pure math mode” as part of the thinking rather than just the “worry about efficiency” thinking I was used to doing when computing. If you allowed “infinitely fast and large” computing, then the idea made excellent sense: it was a universal building block for all scales, and what remained were the central problems of designing complex systems.
The nature of the intercommunications would allow schemes that were like algebras in pure math to be devised so that terms — like “+” or “sort” or “display” could have both general and specific meanings.
The huge potential got me to look at the “impractical” part, which looked much more doable than I’d thought (it still took about 5+ years and a great research group to do). LISP had already solved a number of the problems, and this proved to be a great set of ideas for context.
In the 1960s, software composites that were more complex than arrays, were often called “objects”, and all the schemes I had seen involved structures that included attached procedures. A month or so after the “rotation” someone asked me what I was doing, and I foolishly said “object-oriented programming”.
The foolish part is that “object” is a very bad word for what I had in mind — it is too inert and feels too much like “data”. Simula called its instances “processes” and that is better.
“Process-oriented programming” would have been much better, don’t you think?
In any case, I did not at all have “Abstract Data Types” in mind as a worthwhile goal, even though they were obvious — and this is because “Data” as an idea does not scale at all well.
You are much better off hiding how state is handled inside a “process”, only having processes, and treating processes as “servers” for each other.
That is what I had in mind back then.
As a language for teaching computer science, Haskell has a lot to offer that Java does not. It is a declarative language and takes an approach to programming that is much closer to mathematics. In Haskell, you define what things are in relation to their arguments, whereas in imperative languages like Java, you describe processes.
Describing a process can be more practical for some tasks, but declarative programming forces you to program in a way that is easier to reason about because values are calculated based solely on inputs, not on state. Provability, while not needed for every programming task, is something students need to learn. It’s very important for software that is not allowed to fail ever.
Haskell teaches the discipline of stateless programming, and that’s a really, really good thing for students to learn. It also teaches about category theory, which helps.
Basically, Haskell was created by academics as a pure way to express many of the foundational principles behind CS.
Java, by contrast, was created by a company to give programmers an easy, familiar-feeling way to do cross-platform development.
It’s not surprising that one would be better for teaching CS, while another would be preferred by businesses that want a language where programmers can be treated as interchangeable parts.
OOP, in most languages, introduces information hiding. This is an advantage because it lets you focus on one thing at a time instead of trying to walk through the control flow. Most of the time, if you are trying to track control flow, then you are trying to do the wrong thing. You should worry about the state and behavior of the objects so that there is no need to track control flow. It takes practice to get used to this, but you should give it a try.
Of course for every rule there is an exception. When writing device drivers and other low level code that deals directly with hardware, there is often a temptation to track control flow. That is probably one reason why people (such as Linus Torvalds) prefer C and do not like OOP. But I have found that, even in such low level code, I prefer OOP as it makes the code easier to maintain.
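A small, hypothetical sketch of the information-hiding point above (the class and names are invented for illustration): with the state and the rule governing it held inside one object, you can reason about the invariant in one place instead of tracing every control path that might touch a shared variable.

    # The counter's state and its rule live together; callers cannot put it
    # into an invalid state, so there is no control flow to trace elsewhere.
    class BoundedCounter:
        def __init__(self, limit):
            self._limit = limit
            self._value = 0              # hidden state

        def increment(self):
            if self._value < self._limit:
                self._value += 1
                return True
            return False                 # invariant: value <= limit always holds

        @property
        def value(self):
            return self._value

    c = BoundedCounter(limit=2)
    c.increment(); c.increment(); c.increment()
    assert c.value == 2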
It’s very difficult to do sufficient justice to Edsger. Besides being a foundational computer scientist — in the original strict senses of the term — he was also one of just a few critical gadflies for our whole field, and especially both CS and SE.
One way to think about a great person is that they are “great and interesting” whether they are right or wrong, or in between. This is because it is their unique perspectives that really count: they help us see from directions we couldn’t.
Those whose tiny sense of self and outlook allow themselves to be insulted by the great gadflies completely miss what “great” and “gadfly” and “progress” are all about.
One of the difficulties in dealing with the ideas of a great person is to be distracted by some of the “really good stuff” they are able to accomplish — which tends to be more rooted in an historical time — and to miss the “really great stuff” they are advocating — which tends to be for much larger durations.
In other words, the bug is to mistake the artifact for the bigger underlying idea. This has happened with a number of greats, including Dijkstra, Bob Barton, Ivan Sutherland, Doug Engelbart, etc.
One of Edsger’s greatest lifelong interests was to find out how to design and write programs that really worked and were “bugfree” — including “knowingly and meaningfully” bugfree.
Following the idea that a programmer *should* have enough of a goal in mind to both try to write a program to do something, *and* to be able to tell if it works as intended, what he wanted was to find ways to *get the goal out of the programmer’s mind and into the code*.
Most goals are predications, and computers present the twin problems of *representing* predications and *running* them. The systems that could do this really well — like Sketchpad — though incredible for any time — were not general purpose enough to do either comprehensibly enough.
So the question is “how can you organize — and confine — what the computer does do to approximate “runnable math”. The vast degrees of freedom of computing are fabulously powerful: how to be able to use as many as possible really safely?
Edsger came up with numerous approaches to this, many of them important. With reference to the question here, he was not looking for a religion or for “golden methods” (this was more the way Wirth thought about this).
You can learn a lot about the way he did think about things by looking at his extremely early Algol 60 compiler, followed by his approach to the THE operating system for the EL X8 computer.
One of the big ideas of the 40s — especially 50s — is that the notion of a mathematical function could be imitated very well, if done with care, and — even better — that the idea could be widened to provide a machine independent abstraction for an idea, and especially for programs that could accomplish goals. This created ways to create a design language and scheme with separated concerns for design, but provided ways to combine the concerns in higher level more goal oriented ways.
This quickly gave rise to several different kinds of programming cultures: one pretty ad hoc, and one “like Dijkstra’s” i.e. an attempt to be as “scientific” and “engineering” as possible with respect to design and building.
The “like Dijkstra” one had the disadvantage of not knowing how to “do everything right” but it did have many starts to deep and correct criticisms of the other culture — which pretty much did everything “wrong” and were so blinded by pragmatism they couldn’t see it.
I think these two extreme cultures are very prevalent today, and both are quite entrenched — from many reasons besides personality leanings — I think there is a lot of the “loss aversion” and “sunk cost fallacies” hurting progress on both sides as well.
A way to end this too long answer is to look at Edsger’s quip that “programming is logical brinksmanship”, and realize that the largest psychological difference between the two factions is that his strict computing and engineering approach wanted to avoid brinksmanship via more care and art. The term “structured programming” as used by him was larger than what it degenerated into.
A companion quote from his contemporary Tony Hoare is: “Debugging is harder than programming, so don’t use all of your cleverness writing the program!” It’s a reasonable generalization to say that many of the computer scientists, software engineers, and language designers back then were trying to find ways to write programs that could be reasonably debugged. It’s also fair to say that most programmers today code expecting their program to run and be easily debugged (this doesn’t work well).
Another quote along these lines of Edsger’s that is quite telling 60 years later today is:
At least in Holland, the intellectual level needed for systems design is in general grossly underestimated. I am more than ever convinced that this type of work is *just difficult*, and that any effort to do it with other than the best people is doomed to either failure or moderate success at enormous expenses.
All the best systems designers I’ve known over the years have put a lot of effort into *confinement* — and have come up with numerous mechanisms both hardware and software — some of which are completely critical today.
By and large, most programmers violate useful and needed confinements right and left regardless of the tools they have (some of which can really confine well). Or what is done is via extreme kludgery (naming 5 of these right now will help understanding of this note).
The HW vendors especially recently have done very poorly dealing with and helping these issues.
And — it would be great to have those who do realize the importance of confinements to devise much better tools that fit the needs and scalings of our time.
This is already much too long, but I can’t resist telling “The Sphere Story”. One of the lab managers for a few years at Parc was quite a character. He had been the Chief Scientist at a large company, and while there decided that he would like a stainless steel sphere on his desk for a paper weight. He ordered a 4″ one from the shop and eventually forgot about it. 10 months later it showed up with an invoice for $10,000 (about $85K today). After screaming at them on the phone they informed him that he only said “4 inch diameter” with no tolerances, so ….
One of the many reasons that Sketchpad worked so well is that Ivan Sutherland was an engineer’s engineer, and engineers work to tolerances. He decided to make Sketchpad work within reasonable tolerances, e.g. minimizing the least-squares error fits of the multiple constraints. This allowed very difficult non-linear multidimensional problems to be handled right off the bat.
Compare this with the desire of mathematicians to have “not-true” mean “false” (disclosure: I have a degree in pure math). I remember being shocked by many things in Sketchpad — and this was one of them!
And compare this with the amount of “noise-limiting and correction” in almost all other parts of computing. If you are “thinking system” you are going to have noise and will design the system to work in spite of the different kinds of noise present (including human beings!). It’s that a very different kind of mathematical thinking has to be done besides “A is not not A”.
Note that this doesn’t invalidate Edsger’s large goals, but it shows how poorly the early conceptions of programming have scaled. This non-system view is present in most programming languages in heavy use today, and just doesn’t match up to what is actually needed.
Let me answer with an assumption. Alan Kay was a scientist at Xerox PARC. Object-oriented programming was invented there, and the first OO language was Smalltalk.
Smalltalk is elegant, simple, well designed…. Everything a nice programming language should be.
C++, on the other hand, is… well… quite the contrary. It’s confusing, messy and awfully designed. It’s C forced to be object-oriented.
I don’t mean C++ is not powerful; it is. Nor that it is bad. There’s a reason why many people prefer it.
But I would assume Alan Kay is comparing C++ with Smalltalk.
Summary: they were always thought of as “all sizes” — this is what messaging allows one to think — but it took awhile to invent all the software engineering needed to make the nice idea practical enough for real system building.
Though I had been a journeyman programmer for a few years, and had seen part of the object idea a few times (B220 file system, B5000, etc.), it was Ivan Sutherland’s Sketchpad system ca 1962 that got me thinking about modeling and seeing the first Simula a few days later (how to do something like OOP by instantiating Algol-60 blocks) that got me to see the analogies to (a) cells in Biology, and (b) algebras in math.
I was in a maths mood at the time, so it was easy to see that a virtual computer could model anything on a computer — no matter how large or small — and that they could be recursively combined using messaging. This provided a really simple framework of VMs (with the internals being VMs) all on a point to point network (like the logic of the later ARPA and Internets).
The state of software engineering in 1966 was not advanced enough for either the very large or the very small idea to be rendered as an object. Most of the things “like objects” had a fair amount of overhead, and were thus rather large entities.
But the “maths” idea was too nice to give up. Meanwhile, I encountered LISP, which was not just a “neat idea” but an actual example of how to do both some of the maths and some of the software engineering needed.
For example, one way to look at passing a message is by thinking slightly differently about APPLY in the context of an FEXPR LISP. Another example is that most LISPs since 1.5 have mapped small integers into the protected LISP “entity space” (and the B5000 maps all numbers into the protected address space, etc.).
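As a rough illustration of that reading of APPLY (in Python rather than Lisp, and entirely my own sketch, not code from the original answer): a “message send” can be expressed as looking a handler up on the receiver at run time and applying it, so the receiver decides what the message means.

```python
# Sketch: a "message send" as a late-bound apply.
# The receiver looks the selector up and decides what to do (illustrative only).

class Account:
    def __init__(self, balance):
        self.balance = balance

    # a handler this object chooses to understand
    def deposit(self, amount):
        self.balance += amount
        return self.balance

def send(receiver, selector, *args):
    """Look up the handler for `selector` on the receiver and apply it."""
    handler = getattr(receiver, selector, None)
    if handler is None:
        raise AttributeError(f"{receiver!r} does not understand {selector!r}")
    return handler(*args)

acct = Account(100)
print(send(acct, "deposit", 25))   # 125, resolved at run time rather than compile time
```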
The FLEX language (for the FLEX machine of the late 60s) borrowed more SW ideas from Wirth’s Euler than anywhere else, and also used ideas from Schorre’s Meta II and the Floyd-Evans method of parsing.
Dan Ingalls was my indispensable partner for the Smalltalk project at Parc. The first Smalltalks were rather Lisp-like, and then — as we got more skilled at the SE — we were able to use ideas from more sources (including the FLEX machine) and from original inventions.
We could have done a bit more for arbitrarily large objects, but — since our goal was inventing modern personal computing, etc. — we mainly aimed at “clean, small, simple, and powerful” for Smalltalk. This worked out well for our project at Parc.
However, it’s worth emphasizing that messaging and late binding allow pretty much all improvements within this kind of architecture to be done while the system is in use (and without having to be stopped in order to make any changes, fixes, or improvements).
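A tiny sketch of what that buys in practice (Python standing in for a late-bound system; the class and method names are invented for illustration): because the method is looked up at every send, its definition can be replaced while instances are live, without stopping the system.

```python
# Sketch: late binding means lookup happens at every send,
# so behaviour can be improved while objects are live (illustrative names only).

class Clock:
    def time_string(self):
        return "12:00"              # the "old" behaviour

c = Clock()
print(c.time_string())              # 12:00

# Later, while the system keeps running, install an improved definition:
def better_time_string(self):
    return "12:00:00 UTC"

Clock.time_string = better_time_string   # existing instances pick it up at the next send
print(c.time_string())              # 12:00:00 UTC; no restart, no recompiling callers
```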
Note: Instantiation of an “idea” is a very good idea. “Inheritance” needs to be really controlled in order to not turn into a nightmare (this is why I left it out of the first Smalltalk — Simula I didn’t have it, and Simula 67 did). It was pretty easy to see that “some things could be accomplished” with not just inheritance but multiple inheritance, but the potential for *mess* was huge. Mathematically, it is a good thing to contemplate — but as a large idea that is part of questions about abstractions.
I’ve noted elsewhere that “object-oriented” to me is not a programming paradigm, but a definitional scheme — it’s a way of making things that carry their definitions with them. A programming paradigm needs to include enough design principles to guide designers through complexity and scaling. Messaging between instances of ideas is part of this. Experience would lead me to look at compositions before inheritance (the PIE system of Goldstein and Bobrow used Smalltalk in an interesting way to do this). I think I’d look at “modeling time” before looking at inheritance. Etc. But I do think that establishing a tight enough meaning for inheritance could add more clarity than murk. I haven’t seen a good example of this.
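For the “compositions before inheritance” point, here is a minimal sketch of the contrast (my own example, not the PIE system): the composed object delegates to parts it holds, so behaviour can be assembled and swapped per object instead of being fixed by an ancestor chain.

```python
# Sketch: behaviour assembled from parts (composition) rather than inherited.
# All names are invented for illustration.

class PlainFormatter:
    def format(self, record):
        return record

class JsonFormatter:
    def format(self, record):
        return '{"msg": "%s"}' % record

class Logger:
    def __init__(self, formatter):
        self.formatter = formatter      # a part, chosen (and swappable) per object

    def log(self, record):
        print(self.formatter.format(record))

log = Logger(PlainFormatter())
log.log("started")                      # started
log.formatter = JsonFormatter()         # change behaviour without touching a class hierarchy
log.log("started")                      # {"msg": "started"}
```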
We got away with using inheritance in the later Smalltalks at Parc because we — and especially Dan Ingalls — were very careful with it. A later experience with Morphic in Squeak was not so happy.
I’m going to disagree with some answers here. I don’t believe that technical arguments can sufficiently explain the rise of OOP, or that they even had a deep impact on this development. It’s a logical fallacy that we tech folks regularly fall for. We’d like to live in a world where technological advancement is driven by technical arguments. But this thinking simply doesn’t reflect reality. Reality is multi-faceted, and social and economic phenomena are way more influential in these regards than we’re willing to admit (for better or worse).
For the record, I do think that OOP was a great technical achievement and that it contributed significant improvements to many aspects of programming. Similarly, functional and logic programming brought many things to the table, but they never managed to gain as much popularity. So, why is that? Is it that OOP is so much superior in every regard to all the other paradigms? Some people really believe this, but this road only leads to evangelistic dogmatism. For most of us it is clear that there are other forces working in the background that are harder to grasp and that play more important roles.
I have thought about programming history a lot, but I don’t have a conclusive explanation for the rise of OOP either. However, I have some conjectures to share.
Like others have mentioned, it wasn’t until the late 80s and early 90s that OOP really took off, despite having been around since the 60s. So, if OOP’s selling point was really only “managing complexity” and modularization, then why the long delay? These characteristics would have been considered highly beneficial even before the 80s. I think Wally Cow made an excellent point in his answer to the question: it was the rise of GUIs that propelled OOP into the spheres where it lives now. Some companies saw a business opportunity, and they knew that their solution had to be very well executed in a competitive environment. So naturally, they invested incredible amounts of money into their solutions. The strategy was to obtain market dominance by sheer financial brute force.
Another point is that OOP had a great lobby back in those days, and they were aggressively marketing their products. These folks really understood their profession. For instance, I like to think of the neologism “enterprise software” as one of their best marketing coups. It marked a fictional turning point in history. Before that, people were writing “ordinary” software or “back-office” software, but with this new term it was now finally viable to program for “enterprises”. Nothing really changed, but everybody now felt an urge to do “enterprise”, and to do so you had to jump on the OOP bandwagon. If you didn’t, the only logical conclusion was that you were developing “small-scale, ordinary applications”. I’m paraphrasing, of course. But people feel flattered when their work is associated with important words like “enterprise” or “business critical”. I think some OOP marketers were well aware of this fact.
Finally, an economic phenomenon called “network effects” befell the software industry. Very roughly speaking, if there are multiple technologies competing for a market and one of them manages to gain a critical mass, then the market can reach a tipping point at which a slight advantage turns into significant market dominance. David Rayna mentioned this in his answer. The industry asks for OOP developers, so schools teach their students the necessary skills, and more skilled developers become available on the job market. This creates a positive feedback loop: more programmers contribute to an ever-growing ecosystem of OOP software, which in turn creates a higher demand for skilled OOP programmers who can maintain it. Eventually, the situation became so entrenched that we’re now more or less “locked in”. This means it’s very difficult for another paradigm to emerge in a world which is dominated by OOP.
He meant: “I don’t enjoy programming using Java and its idioms. Haskell is closer to the way I think. But I don’t want to weaken my argument by sounding like a fanboi.”
At least that’s how other people employ similar phrases.
A classic technical-sounding retrofit to back up an emotional language decision.
We all do this.