> The complexity only changes form, so instead of tracing the flow through an inheritance hierarchy you're just doing it through chains of forwarding methods. It's for this same reason I don't believe so much his argument for readability and short classes - breaking everything up does not make things simpler, it makes the complexity spread out over a larger area; while it may be true that it is easier to understand an individual piece, it becomes more difficult to understand the system as a whole.
Preach it! Whilst immense monolithic classes are bad, smashing a system up into a million tiny bits is just as much of a barrier to understanding. It is baffling to me that this is not immediately obvious to everyone.
> He's established both things. First, that class hierarchies are bad when they break encapsulation (he calls this the engineering problem with them).
His exact quote was "Class hierarchies create brittle programs that are difficult to modify."
But he has not demonstrated the first part of that except in one specific case. That case may (?) be common, but it is not inherent in the problem: class hierarchies do not require breaking encapsulation.
His conclusion is not supported by his argument, because his argument applies only to the strawman he created, and not to the general case to which his conclusion refers.
>>I actually very rarely create class hierarchies in C++.
The book I learned C++ from must have weighed 5 pounds, and about 4 of them were devoted to inheritance. I assumed that inheritance was the central point of OOP. I never quite finished that C++ book; the details became overwhelmingly tedious, and I'd already learned to write programs. When the Gang of Four book came out, and in its opening pages declared that inheritance was a bad practice, and you should prefer composition wherever possible, I kind of laughed, thinking it had finally dawned on people that this was a bad idea, and OOP would go away soon. I could not have been more wrong about that.
I still don't quite get how a person can be for OOP and profess a dislike for class inheritance; or believe software engineering metrics that consider things like call-tree depth and class counts indicators of excessive complexity and still believe objects are the right model. It seems like all the features of OOP are tied up with inheritance, and all of the recommended practices around classes and objects imply convoluted code paths and architectural complexity by their very nature. OOP seems to thrive on contradictions.
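For what it's worth, the GoF's "prefer composition" advice boils down to something like this minimal Python sketch (names invented): polymorphism without a base class in sight.

    class HtmlRenderer:
        def render(self, text):
            return "<p>" + text + "</p>"

    class Report:
        def __init__(self, renderer):
            self.renderer = renderer  # has-a instead of is-a

        def publish(self, body):
            # Any object with a render() method works; no hierarchy required.
            return self.renderer.render(body)

    report = Report(HtmlRenderer())
    print(report.publish("hello"))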
> This whole system only builds one type of object. He thinks that his solution is better because it’s more extensible.
> I can’t make him see why it’s bad.
Schools teach OOP as though adding new types of objects is the norm—like every type of software construct is actually a GUI widget in disguise and we're going to be adding new interoperable subclasses every other week, so we may as well get the infrastructure set up to make that easy.
In most real world applications, the only type of object that behaves that way is, well, GUI widgets. Nearly every other type of construct in a typical system will have at most one implementation at a time (possibly two during a gradual transition). Factories, builders, and the whole design pattern menagerie aren't particularly useful when for the bulk of a system's life they're all just proxies to a single constructor.
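To make that concrete, a caricature in Python (names invented): all the extensible machinery, one possible product.

    class Widget:
        def __init__(self, label):
            self.label = label

    class WidgetFactory:
        # Dutifully "extensible", per the textbook...
        def create(self, label):
            # ...but for the bulk of the system's life it's just
            # a proxy to a single constructor.
            return Widget(label)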
I don't know if there's a good way to teach this out of someone besides just letting them experience it, but that's the insight that he needs—different types of code have different maintenance characteristics, and the tools he's been given were developed for a very specific type of code and don't apply here.
> If some principle results in a handful of single-method classes that don't really do anything on their own, the principle is not a good basis for design.
Absolutely agree. Proponents of approaches like this tend to only worry about intra-object complexity, and ignore the fact that a vast, complicated object graph is also hard to reason about.
> OOP is widely-used and easily comprehended because it is a fairly simple way of modeling reality that is compatible with how human beings do it.
Have we not learned by now that these systems are not easy to reason about? Aren't all the things one first learns (i.e. Animal -> Dog) bullshit that should be avoided?
Why does every good OO book say that composition is better than inheritance? Why is every OO book full of examples about how to avoid mutability and make the system easy to reason about?
The idea of OOP systems (as generally thought of) goes completely out of the window as soon as you have any kind of concurrency, even just event handling.
> which rejects OOP
It does not reject it; it takes the useful features, like polymorphism, and gives them to you. Protocols are better than interfaces, better than duck typing.
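Clojure protocols don't map exactly onto anything in Python, but structural protocols give a rough feel for the middle ground between nominal interfaces and bare duck typing (a sketch, not Clojure's semantics):

    from typing import Protocol, runtime_checkable

    @runtime_checkable
    class Quacks(Protocol):
        def quack(self) -> str: ...

    class Duck:
        # No inheritance from Quacks needed; matching the shape is enough,
        # but unlike bare duck typing the contract is explicit and checkable.
        def quack(self) -> str:
            return "quack"

    assert isinstance(Duck(), Quacks)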
> In Clojure, if I want to define a symbol there are nine different ways of doing so.
There are a lot more than nine. But I would recommend Rich's or Stu's talks on simple vs. easy. Just saying there are nine of something and thus it's complicated is idiotic.
Java has only one thing, classes; does that make it simple, or does that just mean that it's hopelessly overloaded?
Clojure is extremely simple. State can only live in a var, atom, ref, or agent. Each of these has clear semantics, including clear semantics in a multithreaded world. No other language has such clearly defined state management.
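Python has no direct analog, but an atom's semantics amount to roughly this (a sketch of the idea, not Clojure's actual CAS-based implementation):

    import threading

    class Atom:
        # Uncoordinated, synchronous state: swap applies a pure function
        # atomically, and reads never block.
        def __init__(self, value):
            self._value = value
            self._lock = threading.Lock()

        def deref(self):
            return self._value

        def swap(self, fn, *args):
            with self._lock:  # a lock stands in for Clojure's compare-and-swap
                self._value = fn(self._value, *args)
                return self._value

    counter = Atom(0)
    counter.swap(lambda n: n + 1)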
> Clojure claims to include these language features as a way to mitigate the complexity of parallelism; frankly, I’ve never found threading or interprocess communication to be any sort of conceptual bottleneck while working on some fairly complex distributed systems in Python.
Distributed system != Shared Memory
Nobody, really nobody, can say that distributed systems are easy. Just listen to the people who implement this stuff. But it is clear that a language generally does not really help you with reasoning about that system.
However, when you run on a 16-core machine with shared memory and you have to do lock ordering and all this stuff, then you will definitely be happy for the tools that Clojure provides.
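For anyone who hasn't had the pleasure: "lock ordering" means every thread must take its locks in the same global order or you risk deadlock, and nothing checks this for you (a Python sketch of the hazard):

    import threading

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def worker_1():
        with lock_a:
            with lock_b:  # takes a, then b
                pass

    def worker_2():
        with lock_b:
            with lock_a:  # takes b, then a: run alongside worker_1,
                pass      # and the two can deadlock forever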
> Less is more (as long as “less” is sufficiently convenient).
Clojure is actually a much smaller and much simpler language than Python can ever hope to be. Clojure is simple, and strives for simplicity in every feature of the language. See here:
> I have always struggled with OOP
Said everyone. I think this is the main reason.
In theory, OOP is supposed to make separation of concerns easier; in reality, separation of concerns is really hard, and adding some syntactic sugar didn't help.
It's hard to create an abstraction that isn't overly broad or so specific that it's not an abstraction.
It's hard to separate concerns if you just inherit those concerns.
Testing OOP inheritance is awkward, so most people don't test it and use dependency injection instead, which kind of breaks the whole point of inheritance and OOP.
OOP makes it look like you have separated concerns, but in reality you were concerned with the wrong thing (most likely nouns) and spread the real concern throughout the whole project.
I think the old-school OOP of classes is dead and the newer traits and object composition will take over, or functional programming, which is very Gucci at the moment.
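i.e. instead of subclassing to swap behaviour in a test, you pass the collaborator in (a minimal sketch, invented names):

    class Mailer:
        def send(self, to, body):
            raise NotImplementedError("would talk to a real SMTP server")

    class Signup:
        def __init__(self, mailer):
            self.mailer = mailer  # injected, so a test can pass a fake

        def register(self, email):
            self.mailer.send(email, "welcome!")

    class FakeMailer:
        def __init__(self):
            self.sent = []

        def send(self, to, body):
            self.sent.append((to, body))

    # In a test: no inheritance needed to swap the behaviour.
    fake = FakeMailer()
    Signup(fake).register("a@example.com")
    assert fake.sent == [("a@example.com", "welcome!")]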
> The other complex part is the lack of inheritance and the trait system, which is also foreign for many
I agree that the trait system is less familiar to most programmers than inheritance, but I'm not convinced it's more complex; inheritance itself isn't really something I'd describe as simple, just something that people are more familiar with.
> You could just as well argue that laying out object class hierarchies and using inheritance is "[solving problems] that have nothing to do with what you’re actually trying to achieve."
Could, indeed semi-regularly do. I actively dislike solving problems with inheritance.
I also actively dislike people missing the point. The problem is not "using programming to solve problems". The problem is letting the novelty of the programming you're using to solve problems con you into thinking you're doing something more clever than you actually are.
>> > And even in an OO language, no decent practitioner writes one-class-per-object.
> But that's the stereotypical example of OO design. You have duck->paint(), duck->quack(), duck->plunge() all in one class (file) and of course the dependency mess and the scattering of aspects throughout the project.
So much nonsense here. One class per object is absolutely not the stereotypical example of OO design. class != file. And dependency management is usually a problem because junior devs pull in a billion half-baked libraries to solve a problem; it's not an inherent problem with OO, and it's certainly not a problem with trying to model the real world.
I'm not even particularly in love with OO. I actually think that functional paradigms often do a better job of modeling the real world. What I'm really disagreeing with is the claim that modeling the real world is a bad practice.
> And even if you make more classes, so that your design is more like one class per concept/aspect, I think the criticism of Mr. Acton is: if you have many instances of a given class, then there must be a better way than calling a method on each individual instance.
If that was Acton's criticism, then he should have said that, instead of saying that "code should be designed around a model of the world" is a lie. Particularly since, if you're acting on a large list of objects, representing it as though you go through the list and each object acts on its own is a pretty bad representation of reality.
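To make the contrast concrete (a hypothetical Python sketch): the difference between asking each object to act and transforming the data in one pass.

    dt = 0.016

    # Object-at-a-time: each instance "decides" to move itself.
    class Sprite:
        def __init__(self, x, vx):
            self.x, self.vx = x, vx

        def update(self, dt):
            self.x += self.vx * dt

    for s in [Sprite(0.0, 1.0), Sprite(5.0, -2.0)]:
        s.update(dt)

    # Data-at-a-time: the same work as one bulk transform over plain arrays,
    # which is a more honest picture of what actually happens.
    xs, vxs = [0.0, 5.0], [1.0, -2.0]
    xs = [x + vx * dt for x, vx in zip(xs, vxs)]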
> In other words, the idea is that classes are fine (they promote modularization), but there shouldn't be more than one instance of each class.
Now you're just confused. Acton specifically was criticizing one instance per class in the section I quoted, and now you're saying that's what he's supporting?
And for the record, if there's only one instance of your class, you didn't need a class.
> So his frustration here is that there's tons of educational material that focuses on inheritance even when that's not what most experienced programmers are doing.
His frustration is pretty clearly that OOP, as a whole, whether by composition or inheritance, is "a load of horseshit" in his words.
> trying to make every method 1 line long and every class completely contained creates huge nested layers of abstractions that are often difficult to follow and hard to reason
I agree a lot with this, and have worked on such codebases. It becomes very difficult to tell what's actually going on, or how, or why! I think, like everything else, it's subject to abuse and misuse.
> It turns out that our knowledge of the behaviour of non-trivial domains (like zoology or banking) does not classify into a nice tree, it forms a directed acyclic graph. Or if we are to stay in the metaphor, it’s a thicket.
That completely chimes with my experience. I wondered if I was doing OOP wrong, as any time the size/complexity of a project (or module) gets above a certain level, a Thicket results in my code.
> Classes are the wrong semantic model, and the wisdom of fifty years of experience with them is that there are better ways to compose programs.
Where's your source/links? What? This needs expanding - if there are better ways, outline your evidence and show us where we're going wrong :)
> We used to love object oriented modeling with inheritance hierarchies, until the point we realized how it was so hard to have any hierarchy perfectly match a given domain.
You're describing (for lack of a better term) a "fad cycle". Our industry, like all industries, goes through cycles where a given practice is declared holy and sacrosanct and doctrine evolves around it; eventually flaws in the idealistic view appear, and because the entire foundation was "this is flawless and essential", and it turns out it's not, the practice is wholly rejected. It's swinging between extremes, neither of which is useful.
Experienced developers never considered inheritance a crucial property of OOP, nor do they avoid it completely now.
Inheritance is static decoration. Decoration is a form of composition. They're all forms of the same thing, where you can make some choices ahead of time (AOT) and some just in time (JIT), and you pay for AOT vs. JIT by ending up with a different performance/flexibility balance.
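In Python terms, roughly (invented example): the same "wrap send with logging" decision, made AOT via a subclass or JIT via a wrapper.

    class Sender:
        def send(self, msg):
            print("sending", msg)

    # AOT: the decoration is fixed at class-definition time.
    class LoggingSender(Sender):
        def send(self, msg):
            print("log: about to send")
            super().send(msg)

    # JIT: the same decoration composed at runtime around any sender.
    class LoggingWrapper:
        def __init__(self, inner):
            self.inner = inner

        def send(self, msg):
            print("log: about to send")
            self.inner.send(msg)

    LoggingSender().send("hi")
    LoggingWrapper(Sender()).send("hi")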
> One could argue that the complexity of the system where code and data are mixed is due to a bad design, and that an experienced OO developer would have designed a simpler system, leveraging smart design patterns.
Indeed he (or she) would have made use of traits/protocols/categories/whatever, to separate behavior from data, while keeping the design extensible (via polymorphism).
This is something I usually find with OOP critics: too much focus on class-driven implementations, without spending much time on the other parts of the toolbox.
> People who really thought it through, and tried different approaches, are the ones that hate it and are vocal about it. They see through the folly. They can tell how core principles of OOP are bad ideas: how encapsulation at object level is usually pointless and costly, how inheritance doesn't make sense, and so on.
You seem confident in this position - can you provide some resources for others to become more familiar with that philosophy, proofs against OOP, and/or alternative approaches to architecture?
>>I am not an OOP convert. I've spent most of my professional life working with classes and objects, but the more time I spend, the less I understand why you'd want to combine code and data so rigidly.
He doesn't want to combine code and data rigidly, so he uses a language without any concept of generics.
I think he doth protest too much. He can write code however he feels like, and I'm not going to deny his personal opinions, but the justifications and reasons for those opinions are all just uniformly nonsense.
>> Prototypes are flexible. They don't have all the ceremony that's behind class-based OOP. In that sense, they're fun to use. But they don't scale well for larger applications, and that's why people jump straight to classes.
Yeah, well, unfortunately class-based OOP (as the article puts it) also doesn't scale that well, as anyone would know who's had to work on a 10+ year-old codebase written in Java or C# (the typical big-OOP languages). That's why you can google 'dependency injection' and get some 2,180,000 results. Because even with all the big-OOP you can get, scaling up and maintaining a project long-term is pain.
Btw, I've mostly worked with C# and it's not that I like javascript particularly (I mostly hate it, and left to my own devices I'd use Prolog everywhere), but the whole OOP thing just sucks so badly. There must have been a better way to use Dr Minsky's great idea [1]; what we got now is no good.
> Inheritance forces you to define the world in terms of strict tree hierarchies,
No, it doesn’t.
Inheritance is the outcome of deciding to model some part of the problem space with a tree hierarchy (one that potentially intersects other such hierarchies). It doesn't force you to do anything.
I suppose if there was a methodology which forced you, as the only modeling method, to exclusively use single inheritance, that would force you to do what you describe, but…that’s not inheritance, that’s a bunch of weird things layered on top of it.
See also the microservices movement!