That is how you think about the software you write. System evolution is only one aspect. Most patterned OO codebases I have come across were *not* engineered for evolution. Sure, there were some classes you could implement or replace, but the complexity was not paid back later.
Design principles can be applied to all implementation mechanisms.
My experience is the opposite. The more architects there are, and the farther they are from the nitty-gritty coding, the worse everything is. Setting common rules is useless unless they're being enforced, and to be enforced you need to actually be hands-on in the code base.
>Senior Engineer during a Systems Design interview
I also pretty strongly disagree with this. The right answer to systems design in 95%+ of cases is a single application, modularized using language tools, talking to a single DB. Most of us don't deal with the scale that requires a constellation of systems.
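To make that concrete, here's a rough sketch of the kind of thing I mean, in Python. All the module and table names are invented for illustration; the point is just that the boundaries live in the language, not the network:

    # A "modular monolith": one process, one DB, module boundaries
    # enforced with plain language features. All names are made up.
    import sqlite3

    class Billing:
        """Only this module touches the billing tables."""
        def __init__(self, db: sqlite3.Connection):
            self._db = db
        def invoice_total(self, customer_id: int) -> float:
            row = self._db.execute(
                "SELECT COALESCE(SUM(amount), 0) FROM invoices "
                "WHERE customer_id = ?", (customer_id,)).fetchone()
            return row[0]

    class Inventory:
        """Only this module touches the stock tables."""
        def __init__(self, db: sqlite3.Connection):
            self._db = db
        def stock_level(self, sku: str) -> int:
            row = self._db.execute(
                "SELECT COALESCE(SUM(quantity), 0) FROM stock "
                "WHERE sku = ?", (sku,)).fetchone()
            return row[0]

    # One application wires the modules together: no network hops,
    # no service discovery, one transaction boundary.
    db = sqlite3.connect(":memory:")
    db.executescript(
        "CREATE TABLE invoices (customer_id INTEGER, amount REAL);"
        "CREATE TABLE stock (sku TEXT, quantity INTEGER);")
    billing, inventory = Billing(db), Inventory(db)
    print(billing.invoice_total(1), inventory.stock_level("abc"))

If one module ever does need to become a service, the boundary already exists and the extraction is mostly mechanical.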
It's not exclusive to inheritance, though. There are lots of times the original author decides they could have done it better and winds up in exactly the same place. It could be progress, too, if it trades one set of failure modes for another that is less severe or less frequent.
This is why a team needs access to a good architect who's seen the paradigms shift, or even cycle. You're almost never starting from scratch, so you really need someone who can incorporate better or more suitable tech without throwing out the baby with the bathwater.
If you're microservices-based, that last part is easier, even if it falls into one of the described pitfalls, e.g., system-of-systems.
This is required reading for all software developers. Any successful software system will end up as a Big Ball of Mud or some other anti-architecture eventually. It's just the nature of the beast. And while we should fight the entropy, it will occur despite our efforts (there might be a few exceptions in the world, but for the most part it is inevitable).
In order for software to be successful, it needs to solve a problem. In order for that software to stay successful in the long term, it will have to change and adapt to keep solving the problem. Architecture usually means abstraction, and abstraction is almost always a tradeoff: you sacrifice flexibility to make something common easier. But there's the rub. A successfully abstracted architecture that solves the problem today will not be able to solve the problem tomorrow without big changes!
But there's a fine line: the difficulty is understanding the long-term vision and making sure short-term improvements don't actively work against the 'ideal'. It's kind of hard for newer programmers to get a good sense of system design.
Legacy code is reduced to a fraction with good architecture, though. Well-architected programs are a lot shorter and a lot more straightforward. And I don't really think you can make clear rules or exceptions, so no general discussions are needed. Just don't admit data-flow designs that aren't completely thought through (and really simple). Yes, it takes experienced developers.
There's a rule (I forget who pointed it out first; I heard it in 1994 or so) that the structure of a software system will exactly mirror the structure of the organization that created it.
Adoption is a big issue with design systems. The more complex it is, the less adoption you're going to see (or the more incorrect usage).
It's amusing to see deeply nested and abstracted components with fancy properties in a design system. Makes you wonder who they're building the system for.
If you look at mature publicly available design systems, they have all evolved over the years towards more simplicity and a flatter structure. Google, Atlassian, etc.
This is a false dichotomy. On one end, you have "overarchitects everything so much that the code is soon unmaintainable" and on the other end you have "architects the code so little that the code is soon unmaintainable".
Always write the simplest thing you can, but no simpler. Finding that line is where all the art is.
Having once been a Platonist about this sort of stuff, I've rarely seen heavy code architecture work out well in practice (not so with system architecture).
I'm reminded of the various Evolution Of a Programmer jokes which end with the master programmer writing the same code as the beginner programmer.
I think the flaw here is that what works today may not work tomorrow, and in many situations it is easy to see where those breaking points will arise.
Take your team sitting in one room building a monolith. Unless the software is intended to stay small and targeted, or it fails in the market, that team will eventually grow until it no longer fits in that room, and now you've got a monolithic application that can't be effectively maintained and enhanced by the organization.
There is value in designing your system for the organization that will own it, but there is also a value in designing your organization for the system you wish to develop.
Experience and published research have shown time and time again that when you try to fight Conway's law, you will lose, so one or the other will have to give. For long-lived applications, you'll find more success if you can alter the organizational structure to fit the system.
Building yourself into a corner is one of the hardest things to avoid.
No design is complete at the start. It needs to strike a balance between serving as a foundation for exploring and developing through the problem domain and remaining flexible. I think a lot about managing and mitigating risk through clever, simply designed architecture that can do a lot: the less there is to the design, the more flexible it can stay in the face of new facts.
For me, the fine balance of over-architecting/abstracting vs. under-abstracting/architecting comes down to weighing the size, impact, and likely number of unknowns in what I'm designing at the start against what the future may bring. Where possible, understanding the domain before daring to design or interpret is the most important factor.
For the few large enterprise systems I've designed in unrelated fields over the past 10 years, I haven't ended up throwing away or majorly refactoring the schema or design. What did change was the stack and tools that became available, so I kept a laser focus on an n-tier architecture in which the presentation, business logic, and app logic layers could be swapped in and out over the system's lifetime. I can say comfortably that the risk of the design falling apart was minimized, and luck increased, because I didn't begin without an understanding of the core principles, entities, and interactions: ultimately, the details and workflow the system has to manage.
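In code, the layer separation I'm describing looks roughly like this sketch (Python; every name is invented, not from any real system I've built). Each layer depends only on a contract for the layer below it:

    from typing import Protocol

    class OrderStore(Protocol):  # data-layer contract
        def save(self, order_id: str, total: float) -> None: ...

    class InMemoryOrderStore:  # one interchangeable data layer
        def __init__(self) -> None:
            self._rows: dict[str, float] = {}
        def save(self, order_id: str, total: float) -> None:
            self._rows[order_id] = total

    class OrderService:  # business-logic layer
        def __init__(self, store: OrderStore) -> None:
            self._store = store
        def place_order(self, order_id: str, total: float) -> None:
            if total <= 0:
                raise ValueError("total must be positive")
            self._store.save(order_id, total)

    def handle_request(service: OrderService) -> None:  # presentation layer
        service.place_order("ord-1", 19.99)

    handle_request(OrderService(InMemoryOrderStore()))

Swapping the data layer later (say, to a relational store) means writing another OrderStore; the business and presentation layers don't change.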
I can't stress enough the importance of keeping things as simple as possible, because complexities are guaranteed to arise in the future.
To me, features and schema changes can, over time, become like perpetually flying bullets.
Have I refactored or adjusted as I go? Absolutely. Does all of it get thrown away? Rarely. Have I over-architected a solution to cover my butt and the new facts never came? Totally. Has a new tech, stack, or library come out that made me replace or modify what I had built to do what I need? Always.
I do lots of paper models to try to see the core patterns, and what could be interpreted in a number of different ways should something grow in that direction. This type of strategy is far more difficult when working in a problem space where there is a lot of research and development.
How about you? Would love to hear how you approach things that have different balances of known/unknown facts.
I give system design interviews, and I'd say that the majority of decent programmers can't actually architect anything well enough. And it doesn't matter if they're given more time; they don't have the training/experience for it. (They'll only learn the impact of their decisions later, eventually painfully.)
On the other hand, they may be able to build it if handed a proper architecture and led through it.
Good design evolves from knowing the problem space.
Until you've explored it, you don't know it.
I've seen some really good systems that have been built in one shot. They were all ground up rewrites of other very well known but fatally flawed systems.
And even then, within them, much of the architecture had to be reworked, or some other trade-off had to be made.
Depends on which measure of "good" is in use, obviously, but I have never seen anyone capable of being efficient over significant time spans without design and architecture being guided, in some way, by principles that balance cost of change, cost of implementation, and difficulty of understanding.
Ignoring cost of change can be absolutely devastating in the long run, but believing you can somehow scry the future for all possible extension points necessary is equally so.
Designing for (anticipated) change, in my mind, is not at all about adding hooks, APIs, or any code at all for that matter.
I think of it more like making sure your system respects analogues of gravity and the other natural laws, as best they translate into the more abstract relations relevant for the particular kind of system you are building.
One could call it programmer- and refactoring-friendly design and not miss the target by much.
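A toy contrast of the two mindsets, in hypothetical Python (nothing here is from a real codebase):

    # Speculative version: hooks and registries for imagined futures.
    PRICE_HOOKS = []

    def register_price_hook(fn):
        PRICE_HOOKS.append(fn)

    def price_with_hooks(base: float) -> float:
        for hook in PRICE_HOOKS:
            base = hook(base)
        return base

    # Refactoring-friendly version: no machinery, just an explicit,
    # pure function. When a real pricing rule arrives, there is one
    # obvious, easy-to-change place for it.
    def price(base: float, discount: float = 0.0) -> float:
        return base * (1.0 - discount)

    print(round(price(100.0, discount=0.1), 2))  # 90.0

The first version pays complexity today for changes that may never come; the second stays cheap to change precisely because it carries nothing extra.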