Nonetheless I have run into the diamond dependency problem in practice. Their solution is unprincipled and there's no way to turn off the default behavior.
Part of the problem is that it overrides extremely low-level things like cd. That can get in your way pretty badly if there's a bug, or just a feature you don't understand.
It is bad, but it is required for performance reasons.
The question is what the solution going forward could be, which is going to be a huge change anyway. I do not see a way out of this with our current architectures.
Dependency controls have improved quite a bit. They went through a couple of variations of this and the current solution is nice.
I haven't ever run into an issue with artifact hand-off up to this point, though. Maybe it's one of the rarer concerns, but it's not something I've experienced (fortunately). I imagine it would be painful to debug, though.
We maintain a lightweight platform alongside a legacy platform that is an untestable ball of mud. This resulted from rapid implementation of business demands over the years, along with a practice of addressing numerous data quality problems through hacks in components, rather than cleaning the data or using a sanitization layer to access it.
Lack of dependency inversion in systems like this is truly the road to ruin: system complexity grows so quickly over time, due to interactions, that eventually nobody really understands what it does or why it does it.
I suppose this is tolerable for some applications, but if you need to numerically validate it, it's a nightmare.
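For what it's worth, the sanitization layer mentioned above doesn't have to be heavyweight. Here's a minimal sketch in Haskell (all names are made up): the rest of the system depends on one small interface, and every data-quality hack lives behind it in a single place instead of being scattered through components.

    module SanitizedStore where

    import Data.Char (isSpace)

    data Customer = Customer
      { customerId    :: Int
      , customerEmail :: String
      } deriving (Show)

    -- The abstraction the rest of the system depends on (dependency
    -- inversion): callers never touch raw legacy rows directly.
    newtype CustomerStore = CustomerStore
      { lookupCustomer :: Int -> IO (Maybe Customer) }

    -- One concrete implementation wrapping a dirty legacy source.
    -- 'fetchLegacyRow' stands in for whatever the legacy platform exposes.
    legacyStore :: (Int -> IO (Maybe (Int, String))) -> CustomerStore
    legacyStore fetchLegacyRow = CustomerStore $ \cid -> do
      row <- fetchLegacyRow cid
      pure (fmap sanitize row)
      where
        -- Every data-quality fix lives here, not in the components.
        sanitize (i, rawEmail) = Customer i (trim rawEmail)
        trim = dropWhile isSpace . reverse . dropWhile isSpace . reverse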
The concerns you're talking about here were primarily deficiencies in tooling, not an inherent problem with the PVP. Many of those deficiencies have been fixed (for example, --allow-newer) or there is work in progress that will fix them (cabal new-build, cabal.project files, etc).
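For anyone who hasn't hit it yet, the cabal.project side of that looks roughly like the sketch below (the package name is made up); the same relaxation is also available on the command line via --allow-newer.

    -- cabal.project (illustrative)
    packages: .

    -- Relax the PVP upper bound that some-dependency places on base,
    -- instead of waiting for its maintainer to bump it.
    allow-newer: some-dependency:base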
They are broken and unreliable, and it's hard to set up guard rails correctly.
I mean, as the article mentioned, they could have set the instance and concurrency settings to lower values, which in this case would have worked.
But finding the right settings to balance intentional auto-scaling against a limit on how fast unexpected costs can rise is hard and easy to get wrong.
Let's be honest: in the end it's a very flawed workaround that might help, if you know about it and set it up correctly.
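To make that concrete: if the service in the article were something like Cloud Run, the caps amount to a couple of deploy-time flags (service name and numbers below are made up). Picking those numbers is exactly the hard part: too low and you throttle a legitimate spike, too high and they barely slow a runaway bill.

    # Cap how far the service can scale out and how many requests
    # each instance handles at once (illustrative values).
    gcloud run deploy my-service \
      --image gcr.io/my-project/my-service \
      --max-instances 5 \
      --concurrency 40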
Such a system sounds nice in theory. In practice it leads to a byzantine nightmare of configurable interfaces, support systems, and so on, with a result that is ultimately fatter and harder to maintain.
I tried to remove the expectation that supercomputer applications would load their libraries dynamically, and that became one of the most hated aspects of that product line.
People really hate having their expectations violated - even if it's an expectation of a fork in the eye.
I was referring to the tedious wait for the configure script. It might not seem so bad now, but what's a mild annoyance on a modern computer with an SSD was quite the interruption on a few hundred MHz and spinning rust.
Another thing I wish it would do is give you all of the missing dependencies up front. Again, not so bad now, but it was a major hassle for me when a missing dependency meant a long walk to the local library, 4 GB flash drive in hand.