
Short answer: yes, inheritance is usually a bad practice in OOP.

Longer answer: read on. The subject is complicated enough, and important enough, that there’s really no way to give a brief answer and address these issues.

What follows is a very long read for Quora, but you’ve asked a question that people have argued about for fifty years, so giving you a simple answer would be unfair. Also, because I think this is a very important topic in our industry, I’m going to ask you for a favor. When you’re done reading this, either click “Upvote” or leave an answer saying why you disagree. And please share this. Inheritance has its place, but it’s so problematic that people need to be aware of these issues.

For a more complete answer, let’s go back in time to 1967. That’s when Simula 67 was released, offering the first “modern” object-oriented code. It’s worth keeping in mind that this was almost 50 years ago (as of this writing) and one would think that we’ve learned a lot about OO programming, but in reality, we still have a lot more to learn.

In Simula 67 we had classes, polymorphism, encapsulation, and inheritance. For class-based OO, the first three have withstood the test of time and have been more or less “non-controversial” for five decades. Inheritance was difficult almost from day one. When people have been arguing and debating over a programming practice for fifty years, it’s worth taking another look at it. In particular, Alan Kay, the inventor of the term “object-oriented programming”, had serious reservations about inheritance, to the point where he left it out of his definition altogether.

Got that? The computer scientist who named OO programming and is considered one of its inventors doesn’t like inheritance. I think we’d be silly to dismiss his concerns.

Inheritance Violates Encapsulation

Here’s just one example of why that’s problematic. As a general rule, if a class is open for subclassing (I’ve noticed that declaring classes as “final” is mostly used as a performance hack and not for concrete business reasons), the class should not know about its descendants, though the descendants, of course, need to know about their ancestors. Now imagine that you have an inheritance chain where Child inherits from Parent, which in turn inherits from GrandParent.

Child naturally relies on some behavior from GrandParent, but in reality, it should only be aware of the Parent class. This is because the contract presented by Parent should be inviolate (open for extension, not change), and if Parent later decided to inherit from a different class, Child should still work. In practice, the more you inherit, the more behavior you’re pulling in that you may not need, and it’s hard to be sure exactly where that behavior is coming from. This makes deep inheritance chains harder to debug and maintain.

So the author of the Parent class implements a public foo() method after verifying that the GrandParent class does not have one. The Child class now inherits Parent.foo(), but later the author of the GrandParent class says “I need to implement GrandParent.foo()”. Since a class should generally not be aware of its descendants, that author has no idea that this public method will be overridden further down the chain. The author of the Child class, meanwhile, realizes they need GrandParent.foo() and not Parent.foo(), but whether or not they can call that method directly depends on the language. If the Child class can call GrandParent.foo() directly, that’s a gross encapsulation violation (it should only know about Parent, and Parent should be free to inherit from another class). If it can’t call it directly, the author may be forced to copy and paste the behavior of GrandParent.foo(), or find some other awkward workaround.

And then GrandChild inherits from Child and wants to call Parent.foo(). Oops.
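
To make the scenario concrete, here’s a minimal Perl sketch of it; the method bodies and the do_work() helper are invented for illustration, and only the class names and foo() come from the description above:

    package GrandParent;
    sub new { bless {}, shift }
    # Added later by GrandParent's author, who (correctly) knows nothing
    # about descendants and so has no idea foo() already exists below:
    sub foo { 'behavior from GrandParent' }

    package Parent;
    our @ISA = ('GrandParent');
    # Written after checking that GrandParent had no foo() at the time:
    sub foo { 'behavior from Parent' }

    package Child;
    our @ISA = ('Parent');
    sub do_work {
        my $self = shift;
        # Child actually wants GrandParent's version, but calling it directly
        # reaches past Parent and couples Child to its grandparent:
        return $self->GrandParent::foo();
    }

    package main;
    print Child->new->do_work, "\n";    # prints "behavior from GrandParent"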

Because you’ve tightly coupled child classes to parent classes, you’ve violated encapsulation and the above is merely one example of how things can go wrong. You can hit Google for many other explanations of why inheritance violates encapsulation.

I might add that people are often in two camps about this. Those who write and release a lot of small modules/libraries/packages/whatever often don’t experience the above pain. However, those who work on very large-scale systems (I do) are more likely to see these problems firsthand. It’s like a beginner writing a 100-line script and not realizing the problems with global variables: certain bad practices don’t scale, but until you scale, it can be hard to see why they’re bad.

Multiple Inheritance is an Abomination

Single inheritance has many issues and I’ve hardly scratched the surface, but multiple inheritance (MI) is so problematic that many languages ban it outright and try to offer some alternatives to it (e.g., Java and interfaces). Other languages, such as Perl, allow multiple inheritance, but you’re generally warned not to use it. Why not?

Imagine the following inheritance hierarchy:
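
Sketched in Perl, it might look something like this; only Copier, Printer, and PoweredDevice appear in the discussion below, so the Scanner branch and the method bodies are assumptions added to complete the diamond:

    package PoweredDevice;
    sub new  { bless {}, shift }
    sub halt { print "PoweredDevice: cutting power entirely\n" }

    package Scanner;    # assumed second branch of the diamond
    our @ISA = ('PoweredDevice');

    package Printer;
    our @ISA = ('PoweredDevice');
    sub halt { print "Printer: halting the current print job\n" }

    package Copier;
    our @ISA = ('Scanner', 'Printer');    # Scanner listed first

    package main;
    my $copier = Copier->new;
    # Perl's default dispatch is a depth-first search: Copier -> Scanner ->
    # PoweredDevice, so this cuts power instead of halting the print job.
    $copier->halt;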

If your Copier paper jams, you might want to tell it to halt() to prevent damage. Fortunately, you don’t have to write that method because it’s already implemented in Printer. Unfortunately, because inheritance is usually implemented via a depth-first search of the inheritance tree, you’ve called the PoweredDevice.halt() method, removing power to the Copier altogether and losing all of the other queued jobs. Awesome!

There is something called C3 linearization, which searches the inheritance hierarchy in a smarter, breadth-first-like order for the most suitable method, but all it does is mitigate the problem. It will find the next method that seems to most closely match what your code is looking for; it can’t guarantee that it will find the code that actually does what you need it to do. Thus, while it’s a well-defined algorithm, it’s still just a heuristic, and heuristics break sooner or later. See my previous comments about “scale.”
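
In Perl, for example, you can opt in to C3 with the mro pragma. Continuing the sketch above, the search order changes so that Printer::halt wins, but the machinery is still just picking the nearest plausible match for you:

    package Copier;
    use mro 'c3';    # requires Perl 5.10+; linearization: Copier, Scanner, Printer, PoweredDevice
    our @ISA = ('Scanner', 'Printer');

    package main;
    Copier->new->halt;    # now reaches Printer::halt, not PoweredDevice::halt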

I might add that some find the term “C3 linearization” to be a little bit too “comp-sci speak” for them, so I offer an alternative name: “mopping the Titanic.” When you have to jump through such extreme hoops to work around a problem, maybe it’s better to fix the problem instead?

What is Inheritance?

A subclass is generally considered to be a more specific version of a parent class. For example, a Dog might be a subclass of Mammal which might, in turn, be a subclass of Animal. Dogs don’t inherit from Trees just because you want a bark() method. Keep that in mind.

While I can’t find the quote right now (apologies), the Beta programming language only supported single inheritance. When its creators were researching how to support multiple inheritance (MI), they discovered that none of the examples they could find were about creating more specific versions of things. Instead, they were about sharing behaviors. So MI in the real world seemed to violate the intent of inheritance in general, and the Beta developers just left it out (to be fair, there were practical considerations, too).

What does this mean? Well, the duckbilled platypus is a mammal, but it lays eggs. Animals that lay eggs are called “oviparous”. Most mammals gestate their young in the womb and are “viviparous.” Clearly it’s true that a duckbilled platypus isa mammal, but it’s not viviparous. If you wanted to model its reproductive cycle, how do you share that complex behavior without copying it? Are you going to use multiple inheritance? Maybe you have an EggLaying class (spiders use it too!). So you have:

    class DuckBilledPlatypus isa Mammal isa EggLaying {
        ...
    }

Except … EggLaying is an adjective, not a noun. Why is that even a class? And should you inherit from that before or after the other class?

While the example above is silly, it shows a critical problem with inheritance: real-world problems don’t neatly fit the tree-like structure of inheritance.

But I hear you say “ah ha! That’s a straw man because you forgot delegation which can easily solve this problem!”

No, I didn’t forget delegation.

The Problem with Delegation

I like delegation and I use it, but it’s also abused. Here’s a real-world example.

A long time ago I worked for a company that needed to audit their objects to see who was changing the data. That was simple: they just had their objects inherit from their Audited class. One day they had a class which needed to be audited, but only to write a line to a log file, and it was inheriting from a class which did not have Audited in its inheritance hierarchy. They were forced to fall back on multiple inheritance to implement this feature, and it meant that the class in question also picked up some “extra” behaviors from Audited which it didn’t need and definitely didn’t want. Eventually it was pointed out that Audited should be something they should delegate to instead of inheriting from.

This solution was better, but it had two problems:

  1. You had to write extra scaffolding code to implement the delegation
  2. And why is a pure behavior, sans state, a class in the first place?

Think about that last point. It makes sense to have a particular instance of, say, a house, or an order, or a dog, but what is an instance of an “audited”? That question is completely nonsensical except in the tortured terms of OO modeling that we have today.
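
Both complaints show up in even a minimal delegation sketch. In the Perl below, only the Audited name comes from the story; Order, record_change(), and the auditor field are invented for illustration:

    package Audited;
    sub new { bless {}, shift }
    sub record_change {
        my ($self, $who, $what) = @_;
        print "AUDIT: $who changed $what\n";
    }

    package Order;
    sub new {
        my $class = shift;
        my $self  = bless {}, $class;
        # Scaffolding, part one: every class that wants auditing has to hold
        # and wire up its own Audited instance ...
        $self->{auditor} = Audited->new;
        return $self;
    }
    # ... and part two: a forwarding method for each piece of audit behavior.
    sub record_change { my $self = shift; $self->{auditor}->record_change(@_) }

    package main;
    Order->new->record_change( 'alice', 'shipping address' );
    # It works, but what exactly is that Audited->new object? An "instance of
    # audited" isn't a thing; it's pure behavior dressed up as a class.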

What’s the Root of the Problem?

Remember that we’ve been arguing about inheritance for fifty years. For anyone who disagrees with my points above, you can spend the next fifty years arguing with people who do agree with those points. Remember: fifty years of arguing. Even the most intransigent developer should pause at that.

The core of the problem with inheritance is that classes actually serve two purposes. First, they’re agents of responsibility. Classes should be responsible for the things they’re responsible for, and as anyone working on large-scale systems can tell you, classes take on more responsibility over time, and this tends to increase class sizes.

Got that? Classes as agents of responsibility leads to larger class sizes.

However, when you inherit from a class, it’s an agent of code reuse. If my Dog inherits from Mammal, I don’t want to find out that someone’s implemented Mammal.play_mp3(filename). In fact, even if all of the Mammal methods are appropriate for that class, if I just want a GuardDog, there’s a lot of behavior in the Mammal class I probably don’t need or want to model.

Classes as agents of code reuse should prefer smaller class sizes.

So there’s a fundamental tension in classes when used with inheritance. The secret is to decouple class responsibility and behavior. This sounds odd, but it’s straightforward.

One way to “solve” this was interfaces, but that didn’t quite work because interfaces merely declared responsibility without providing behavior, and many developers have moaned about duplicated code in Java apps due to heavy use of interfaces (Java now allows interfaces to provide default implementations, but this is only a partial fix).

Delegation was interesting, but it often leads to plenty of classes you can instantiate which, nonetheless, should not be classes (see the Audited example above).

Mixins, which came from a Lisp variant named “Flavors”, were an interesting idea. They provide small bits of behavior you can “mix in” with any class you like. However, they don’t really offer any compositional safety; they have the same encapsulation issues as inheritance and issues similar to those of diamond inheritance.

Using Ruby as a more familiar example, if your Person class mixes in both your Serializable and Storable mixins and both of those have a .marshal() method, which one do you get? If it were multiple inheritance, you’d get the method from the first class you inherited from. With mixins, you get it from the last module you mixed in, because Ruby implements mixins under the hood as single inheritance!

    irb(main):026:0> Person.ancestors
    => [Person, Serializable, Storable, Object, Kernel]

Surprise! You can also watch this part of a talk I gave about OO programming where I describe some more issues with Ruby mixins.

Aspect-Oriented Programming (AOP) has been another attempt to solve this, and while it was novel, it had a core problem: instead of action at a distance being considered a bug, AOP made it a first-class concept! I still remember a friend debugging a threaded AspectJ application which randomly failed. It turned out some developer had renamed a method such that a pointcut no longer matched; as a result, the advice could not be applied to that method, meaning it wasn’t locked and different threads could access it at the same time.

If none of the AOP stuff that I wrote made any sense, let me paraphrase: AOP was an experiment that accidentally made its way into the wild and, when found, should be beaten to death viciously with hammers. If you must use AOP, only apply advice to augment a program, not modify it. If the program won’t run correctly without aspects, that should be considered a bug; that’s one of the reasons why aspects are usually shown being used only for logging or other trivial concerns. (Side note: one of the original authors of the first AOP paper is a friend of mine. If you are he and you’re reading this, we’ll argue about it later.)

The Solution

I strongly recommend Smalltalk-style traits. They were first described in Traits: Composable Units of Behavior, and that’s the paper most people seemed to rely on when they implemented traits in other languages. Unfortunately, they should be relying on Traits: The Formal Model, a paper which clarifies many things (but not all) that are vague in the first paper.

In short, traits (or “roles”, as they’re sometimes known) are composable units of behavior. They’re like mixins, but the methods are added directly to your class, and if another method of the same type (name plus signature, depending on the language) is found, the code fails to compile unless you explicitly state which implementation you need. No more heuristic guesswork.

So the class states what responsibilities it has, but any behavior which must be shared (such as serialization) goes into a trait/role.
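
As a hedged example, here’s roughly what that looks like with Perl’s Moose roles, which draw on the traits papers; the role names and marshal() bodies are invented, and the conflict-resolution syntax assumes a reasonably recent Moose:

    package CanSerialize;
    use Moose::Role;
    sub marshal { 'serialized ' . ref shift }

    package CanStore;
    use Moose::Role;
    sub marshal { 'stored ' . ref shift }

    package Person;
    use Moose;
    # Composing both roles as-is fails at composition time because both supply
    # marshal(); unlike mixins, nothing is silently picked for you. You must
    # resolve the conflict yourself, e.g. by excluding one implementation:
    with 'CanSerialize', 'CanStore' => { -excludes => ['marshal'] };

    package main;
    print Person->new->marshal, "\n";    # unambiguously CanSerialize's marshal()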

There’s a hell of a lot more to it than that, but I’ve routinely built large systems using only roles and no (or limited) inheritance and found the systems are more predictable, more flexible, and easier to understand. Further, you’ll find more and more languages are implementing them.

For a shorter description, I wrote a Five Minute Introduction to Smalltalk-style Traits. It’s rather limited, but it gives you the gist of the issue.

Update

Because I’ve been asked about this more than once: yes, I support using abstract base classes. In fact, that’s just about the only time I support inheritance. However, even then I try to avoid them if I can use a trait instead. Traits give me a compositional safety that inheritance cannot.
