
120 is not that high. If you have fine-grained modules for reuse across multiple product lines, you need lots of projects. Yes, one can create a separate solution for each line, but many times one needs to load all the projects anyway, especially on feature teams that touch a little bit of everything.
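
One mitigation, assuming a reasonably recent Visual Studio / .NET toolchain, is a solution filter (.slnf) per feature area, so a team opens only the projects it touches instead of the whole solution. A minimal sketch, with hypothetical solution and project paths:

  {
    "solution": {
      "path": "Product.sln",
      "projects": [
        "src/Core/Core.csproj",
        "src/Billing/Billing.csproj",
        "tests/Billing.Tests/Billing.Tests.csproj"
      ]
    }
  }

Saved next to the full solution as, say, Billing.slnf, it loads just that subset in the IDE (and dotnet build accepts it too), which keeps things responsive even when the full .sln has hundreds of projects.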



How many projects do you consider to be a 'great number'? I ask because I do development in VS with solutions that contain around 300 projects and don't experience anything close to what I would consider 'really sludgy'.

My main project has ~30. But still I prefer to go one at a time, especially with a major version bump, for the sake of understanding exactly what breaks. Perhaps it's just a personal preference.

For personal projects it might be overkill depending on the scope. But if 2+ people are working on a constantly evolving system, shipping new features, and want consistency in the design system? It’s a solid foundation.

When over a thousand people work on a single project and tens to hundreds on a single submodule, there can be no real "personal taste" entering the equation or there will be trouble.

This is only true for trivially small products. The pipeline complexity for multi-team monoliths can consume an entire dev team to maintain.

So it's not really 400 projects, it's maybe 20 projects optimized in 20 different ways each.

That's a whole different way to look at it.


I agree, but as with everything in optimization and refactoring, the cost of moving up toward 100 offers diminishing ROI. I think if you're at 90 that's solid; I have no desire to be in the 100 club, nor would I tell a client they need to be there.

I also disagree.

I agree that pretending you have multiple clients sets up a more difficult bar to meet. I agree that bar might be overkill for some projects.

The idea that it is overkill for all projects is a leap I can't follow. Optimizing for greenfield development speed is something inexperienced devs often do, and that's what this feels like.


Nice work! This looks very promising. The only thing I would add is that the difference between the packages seems a bit weird:

- Five seems low for projects in the first tier

- Projects are unlimited in the second tier, but environments are not? Is that a technical limitation?


Tens to low hundreds possibly, but microservices can make things much worse as you scale to thousands of developers. The ultimate limit of any design is how much any one person can understand, from both a complexity standpoint and a rate-of-change standpoint. It's the same issue that pushed people from goto > functions > libraries > ... Eventually you need another layer of abstraction.

For very large companies doing mergers and the like, things are always going to be in flux in ways that the startup world tends to ignore.


Heavy development processes don't scale when it comes to volume of projects.

Yeah that sounds like a good choice for managing complexity.

Probably a trade-off with cost I'm guessing. For my project I was trying to get the cost as low as possible, since it's just a side project for me.

Interesting point about the open issues; I didn't notice that, but it does seem like it could be a problem for the longevity of the project.


For internal business apps - or even b2b tools - 10 is probably plenty. There's a lot of premature optimization around.

We had ~150 projects across about 1M LOC and 4 developers because they were being used like folders - they weren't really separate codebases, just a means of organising the code. We cut that down for a drastic improvement in speed.

From experience, the factor is more like 5-10. Depending on the complexity of the existing solution, and depending on the wealth of custom implementations, of course.

For sure. Next, after selecting a low-code monolithic stack, we are informed that we can reuse the low-code solution, which is doubling down on the monolithic approach. But of course, with 30 consultants.

> Scalability is often at odds with peak effectiveness

This is the astute point of the article. It transcends the domain at hand (interchangeable software developers): it basically applies to any professional talent pool, and even to the various tools, machines, and products we use in everyday life.

The question one has to ask is what price one is willing to pay for scalability. And whether you’re paying for something you really need or not.


> The way it usually works, if 1 of your 100 modules needs to be scaled, it probably means the overall monolith only needs a small amount of extra resources, since that module is only using 1% of the total. So it's not like you need to double the instances of the whole thing.

I could be wrong, but doesn't "monolith" usually refer to one really heavy app? As soon as you go from needing 1 instance to 2 (because 1 of the 100 inner "modules" needs to be scaled), I would guess most of the time there's a high expense (lots of memory) in duplicating the entire monolith? Unless you can scale threads or something instead of the entire process... or unless the process doesn't initialize all 100 modules as soon as it starts (although I imagine it would in most cases).
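
A rough back-of-the-envelope sketch of that concern, with entirely made-up per-module memory figures (and assuming, as above, that every monolith instance initializes all 100 modules at startup):

  # Hypothetical comparison: extra memory needed to scale out a whole
  # monolith replica vs. just the one hot module split into its own service.
  # All numbers are assumptions for illustration, not measurements.

  MODULES = 100
  MODULE_MB = 40            # assumed average resident memory per module
  EXTRA_COPIES = 1          # we need one more copy of the hot module's capacity

  monolith_instance_mb = MODULES * MODULE_MB          # ~4000 MB per full instance
  whole_monolith_scale_mb = EXTRA_COPIES * monolith_instance_mb
  single_module_scale_mb = EXTRA_COPIES * MODULE_MB   # if it were its own service

  print(f"extra memory, new monolith replica:    {whole_monolith_scale_mb} MB")
  print(f"extra memory, scaling only the module: {single_module_scale_mb} MB")

Under those assumptions the extra replica costs roughly 100x the memory that scaling just the hot module would, which is the worry here; the grandparent's counterpoint is that you often don't need a full extra replica at all, just a bit more headroom on the existing instances, since the hot module is only a small slice of each one.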


But this is a problem at large scale. Each language and stack carries a variety of overhead. The more of them you have, the higher the overhead is.

Say you run a factory, and everyone needs a hammer. When there's only two teams, two kinds of hammer is fine. But fast forward to 10 teams. Can you always get all of those hammers when you need them? What if team #3 needs extra support? You need to find a new engineer who knows hammer #3, or take an engineer off of team #6 and train them on how to use hammer #3. And if all the hammers need a slight modification specific to that company, you need to make 10 different modifications. And how will all the teams benefit from shared hammer tips-n-tricks if they all use different hammers? There's a lot more overhead than you think at first.

