If you change the word "arbitrary" to "necessary" (implying a different bias than the one you went with) then all of a sudden this attitude sounds less helpful.
Similarly "easy to limp along with a bad architecture" could be re-written as "easy to work with the existing architecture".
At the end of the day, it's about getting work done, not making decisions that are the most "pure".
You have to balance getting work done vs. purity, and Microsoft has spent years trying to fix a bad balance.
Windows ME/Vista/8 were terrible and widely hated pieces of software because of "getting things done" instead of making good decisions. They made billions of dollars doing it, don't get me wrong, but they've also lost a lot of market share and have been piling up bad sentiment for years. Their pivot since then has had nothing to do with "getting work done"; it has come from going back and making better decisions.
Those releases (well, Vista and 8 anyway, I don't know about ME) came out of a long and slow planning process - if they made bad decisions I don't think it was about not taking long enough to make them.
Arguing about purity is only pointless and sanctimonious if the water isn't contaminated. Being unable to break a several-hundred-megabyte codebase into modules isn't a "tap water vs. bottled" purity argument; it's a "let's not all die of cholera" purity argument.
> At the end of the day, it's about getting work done, not making decisions that are the most "pure".
This attitude will lead to a total breakdown of the development process over the long term. You are privileging Work Done At The End Of The Day over everything else.
You need to consider work done at every relevant time scale.
How much can you get done today?
How much can you get done this month?
How much can you get done in 5 years?
Ignore any of these questions at your peril. I fundamentally agree with you about purity though. I'm not sure what in my piece made you think I think Purity Uber Alles is the right way to go.
Then I'll point to the wide success of monolithic utilities such as systemd as evidence that consolidating typically helps long term.
Which is to say, not shockingly, it is typically a tradeoff debate where there is no toggle between good and bad, just a curve that constantly jumps back and forth between the two based on many, many variables.
systemd is also completely useless on its own. It still needs a bootloader, a kernel, and user-space programs to run.
When it comes to process managers, there is obviously disagreement about how complex they should be, but systemd is still a system to manage and collect info about processes.
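To make "manage and collect info about processes" concrete, here is a rough sketch (assuming a systemd host with Python available; "nginx.service" is just a placeholder unit name, not something from this thread) of pulling a unit's process state out of systemd via systemctl show:

    import subprocess

    def unit_info(unit: str) -> dict:
        # Ask systemd for a few properties of a unit; systemctl prints KEY=VALUE lines.
        out = subprocess.run(
            ["systemctl", "show", unit,
             "--property=MainPID,ActiveState,ExecMainStartTimestamp"],
            capture_output=True, text=True, check=True,
        ).stdout
        return dict(line.split("=", 1) for line in out.strip().splitlines())

    print(unit_info("nginx.service"))  # e.g. {'MainPID': '1234', 'ActiveState': 'active', ...}

Nothing fancy; the point is just that supervising processes and exposing their state is the core job, whatever you think of the rest of its scope.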
The hierarchical merging workflow used by the Linux kernel does mean that there's more friction for wide-ranging, across-the-whole-tree changes than changes isolated to one subsystem.
Isolated changes will always be easier than cross-cutting ones. The question really comes down to whether or not you have successfully removed cross-cutting changes. If you have, then more isolation almost certainly helps. If you were wrong, and you have a cross-cutting change you want to push, excessive isolation (with repos, build systems, languages, whatever) adds to the work, which typically increases the odds of failure.