There is a point where performance optimizations get in the way of clarity, but that's after a long plateau where simple software == fast software. And simple software is the most amenable to change. It might not be the fastest way to initially write the software, though, since leveraging an existing large, complex framework can give a head start to someone already familiar with it.
Somewhere, somewhen, we, as software developers, started thinking that other programmers would rather extend code than modify it. This has led us to write code that tries to predict the use cases of future programmers and to pre-emptively include mechanisms for them to use or extend our code. And because it has seeped so deeply into our culture, if we don't do this -- engage in this theater -- we get called out for not being good software engineers.
Of course, the extra hooks we put in to allow re-use and extensibility usually result in code that is slower and more complex than the simple thing. Worse, very often, when a customer needs a new feature, the existing extension hooks did not predict this use case and are useless, so the code has to be modified anyway -- but now it's 10x more difficult because of the extra complexity, and because we feel we have to respect the original design and not rip out all that complexity.
I like Knuth's quote [1] on this subject:
> I also must confess to a strong bias against the fashion for reusable code. To me, “re-editable code” is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.
Writing generally "reusable code", aka a library, warrants a different approach to software development than application code in many areas.
1. Application code = Fast-changing, poorly specified code. You need a rapid development cycle that supports "discovering" what the customer wants along the way. Your #1 job is pleasing the customer as quickly and as reliably as possible.
2. Library code = Slow-changing, highly specified code. You have a long, conservative development cycle. Your #1 job is supporting application programmers (the customers of your library).
> Somewhere, somewhen, we, as software developers, started thinking that other programmers would rather extend code than modify it.
That was when stuff like "proper testing" was deemed too expensive. Extending something is unlikely to break existing workflows, but it's very easy to break them during a modification.
Companies used to have hordes of manual testers/QA staff, who have all been replaced by automated tools of questionable utility and capability.
The tools are very useful, and they have well-known capability. That capability is strictly less than the capability of most manual testers / QA staff, but it's a lot faster at it, and gets much closer to being exhaustive.
Automation should mean you can do a better job, more efficiently, more easily. Unfortunately, ever since the Industrial Revolution, it seems to mean you can do a quicker job with less money spent on labour costs.
> That capability is strictly less than the capability of most manual testers / QA staff, but it's a lot faster at it, and gets much closer to being exhaustive.
That's if you put the effort in to write good tests. When I look at the state of gaming in general, it's ... pretty obvious that this hasn't worked out. Or the GTA Online JSON debacle - I'm dead sure that this was known internally for a long time, but no one dared to modify it.
And even then: an automated test can't spot issues unrelated to the test that a human would spot immediately. Say a CSS bug causes the logo to be displayed in greyscale. The developer who accidentally placed the filter on all img elements writes a test case that checks whether an img element in the content area is rendered in greyscale; the tests pass, the branch gets merged without further human review... and boom.
The simplest solution is definitely not the most amenable to change.
We've just implemented a large feature where some devs tried to "hardcode" all the logic instead of applying a kind of rules engine. I was horrified, because the whole thing was coupled to the rules we currently needed, yet we all know this is just the beginning: we plan to add more rules, and even to allow custom rules to be defined by our customers. So even though what they were trying to do is often lauded on HN and other forums as applying KISS and YAGNI, in this case adding a new rule would mean changing basically everything - the engine, the data the engine persists, potentially the end result... everything!

Now, perhaps this was indeed simpler. But it's the opposite of modifiable - and by the way, implementing it with abstract rules that store their own data, which the engine need not know about, is actually much cleaner, and the de-coupling comes almost for free.
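The decoupled design described above can be sketched roughly like this (Python, with hypothetical rule and field names, not the actual system): each rule object owns its own data, and the engine only ever calls `apply()`, so adding a new rule type never touches the engine.

```python
from dataclasses import dataclass
from typing import Protocol


class Rule(Protocol):
    """A rule owns its configuration; the engine only calls apply()."""
    def apply(self, item: dict) -> bool: ...


@dataclass
class ThresholdRule:
    field_name: str
    limit: float

    def apply(self, item: dict) -> bool:
        return item.get(self.field_name, 0) <= self.limit


@dataclass
class AllowListRule:
    field_name: str
    allowed: frozenset

    def apply(self, item: dict) -> bool:
        return item.get(self.field_name) in self.allowed


def run_engine(rules: list, item: dict) -> bool:
    # The engine knows nothing about each rule's internal data;
    # new rule types (even customer-defined ones) need no engine changes.
    return all(rule.apply(item) for rule in rules)


rules = [ThresholdRule("amount", 100.0),
         AllowListRule("country", frozenset({"DE", "FR"}))]
print(run_engine(rules, {"amount": 50, "country": "DE"}))   # True
print(run_engine(rules, {"amount": 500, "country": "DE"}))  # False
```

The rule-specific data (`limit`, `allowed`) can be persisted per rule instance, which is the decoupling the comment describes.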
This doesn't sound like a simple solution from your fellow devs. It appears to have been an easy solution. If the distinction isn't familiar to you, there is a great talk by Rich Hickey [0] that explains that distinction and more. The talk is definitely not only applicable to Clojure.
YAGNI is a great slogan, but it must not become dogma. If you already know that you are going to need custom rules, then prepare for it. But if, for example, the current rules are almost trivial and you don't yet know which rule engine will be the right fit later on, then it might be a good idea to postpone the decision about the rule engine until you know more. In the meantime a hard-coded solution could be enough.
I know that talk very well. And I still disagree that hardcoding the rules is easy but not simple. It's always going to be simpler to implement code that's more specific (i.e. has less room for the unknown) than code that's less specific. Or do you think Rich said anything that shows this isn't always true?
Rich didn't say it, if I remember correctly. But there are problems where a more general algorithm is simpler than a more specific, straightforward one - usually because you change the way you model the problem.
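A toy illustration of that remodelling (a hypothetical Python sketch, not from the thread): a chain of special cases versus the same tiers expressed as data. The general version has a single code path, and a new tier is a new row rather than a new branch.

```python
import bisect


# Specific: one branch per tier -- every new tier edits the control flow.
def shipping_specific(weight_kg: float) -> float:
    if weight_kg <= 1:
        return 3.0
    elif weight_kg <= 5:
        return 5.0
    elif weight_kg <= 20:
        return 9.0
    else:
        return 15.0


# General: the tiers become data; the algorithm is one sorted lookup.
TIERS = [(1, 3.0), (5, 5.0), (20, 9.0), (float("inf"), 15.0)]
LIMITS = [limit for limit, _ in TIERS]


def shipping_general(weight_kg: float) -> float:
    # bisect_left finds the first tier whose upper limit is >= weight_kg.
    return TIERS[bisect.bisect_left(LIMITS, weight_kg)][1]


# Both versions agree across the tier boundaries.
for w in (0.5, 1, 1.5, 5, 19.9, 21):
    assert shipping_specific(w) == shipping_general(w)
```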
Otherwise, I have to take your word for it, because I cannot see your specific example.
YAGNI definitely doesn't apply in cases where you do actually know that you are gonna need it.
If the task is "build a rules engine with X, Y, and Z rules, that can add A, B, and C rules next year", then delivering a hardcoded XYZ rules engine is an absolute failure to engineer and a very braindead use of "YAGNI".