> Frustrating enough since it feels like in JavaScript, you never have control over what’s going on.
> In C, you can go ahead and misconfigure a makefile to output debug files, a segfault, or a bad reference, but at the end of the day, it’s your fault.
> The same goes for any serious language ecosystem out there.
> But JavaScript is quite the opposite: it works like magic (and not as a compliment!) even though you know it’s pure software!
I think we can recognize from these comments what is happening here. If you think that the tools you are working with are magic (whether because of bias, unfamiliarity or whatever), it will be much harder to pin down problems. If you understand that you're dealing with very complex, but deterministic systems, you will be able to work systematically, observe things carefully and carry out experiments to check your hypotheses, and you have a chance of actually discovering what's going on.
While I agree with everything you have said, and it has been my motto since I read "Surely You're Joking, Mr. Feynman!", the point of the post was to point out (maybe I did it badly) that the things you describe are pretty much impossible given the ecosystem we have nowadays in JavaScript (as a whole!). Of course, you can theorize, form a hypothesis about what is going on and then try to prove it, but it's just too hard to keep track of everything.
Another random story from this week: we have been using `nanoid` to generate IDs for an internal tool. Apparently, they made a breaking-change release, and now you cannot use it anymore in a CommonJS environment. See my point? Where's the hypothesis we can make? It's hours and hours spent tinkering with our code and other people's dependencies, and if, as you said, you do put forward a theory, that theory won't match how real systems actually work (see the networking section of the blog post).
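For context, this is roughly what that breakage looks like, assuming a newer nanoid that ships as ESM-only (the package name is real, the rest is a sketch):

```javascript
// On nanoid 3.x this line works; after the ESM-only major release it throws
// ERR_REQUIRE_ESM instead:
//   const { nanoid } = require('nanoid');

// One workaround from a CommonJS file is a dynamic import, which is async:
async function makeId() {
  const { nanoid } = await import('nanoid');
  return nanoid();
}

makeId().then((id) => console.log(id));
```

The dynamic import is only one option; the other is migrating the consuming code to ESM.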
> I deleted all cache on my local environment, all node_modules references, all lock files.
Were the lock files not checked into version control? It sounds like the problem was that different environments had different versions of dependencies, which is what lock files are meant to solve.
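For what it's worth, a committed package-lock.json pins transitive versions too. A quick sketch of checking exactly what it pins (assuming npm's v7+ lockfile format; the package name is just an example):

```javascript
// Print the exact version the lockfile pins for a given package. npm v7+
// lockfiles keep a flat "packages" map keyed by node_modules path.
const fs = require('node:fs');

const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));
const entry = lock.packages?.['node_modules/nanoid']; // example package

console.log(entry ? entry.version : 'not pinned in this lockfile');
```

Installing with `npm ci` rather than `npm install` then guarantees every environment gets exactly those versions.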
It partially helps, but as I mention in the article, the package had a change, a minor (non-breaking) one, that caused A to affect B without us having a direct explanation as to why.
Locking helps with packaging: it reduces the problem of packages being updated without your control, but it doesn't help with interdependencies or the other things I mentioned (not even talking about security here).
Summary: the author updated a library without noticing and/or without knowing what version they were using before, because they aren't using lock files.
The library that broke was something using the AWS SDK. The library stopped working with an updated version of that AWS SDK.
They experimented with different package versions and combinations of service instances to fix the issue, because they didn't know what version of the package they were using in production before the package broke for them. They got an error message about a missing function in the updated library related to streams. That led to some false guesses about the source of the error.
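A cheap mitigation for that class of failure is to assert the expected API right after constructing the third-party object, so it blows up loudly and locally instead of deep inside a stream pipeline. A rough sketch (the property names are lifted from the quoted error and purely illustrative):

```javascript
// Fail fast if a third-party object no longer exposes a method we rely on,
// instead of hitting "x is not a function" deep inside a pipeline later.
function assertHasMethod(obj, name) {
  if (typeof obj?.[name] !== 'function') {
    throw new TypeError(
      `Expected ${name}() to be a function; the dependency's API may have changed`
    );
  }
}

// Illustrative "client" whose new version dropped encodeStream:
const client = { subset: { encode: (chunk) => chunk } };

try {
  assertHasMethod(client.subset, 'encodeStream'); // fires immediately
} catch (err) {
  console.error(err.message);
}
```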
The conclusion is that JS is not a serious programming language like Rust or C#.
Seems like they have complex CI/CD in place, but don't conclude that they should be pushing their package lock files.
And I don't understand at all how using another programming language would have prevented this issue.
Thanks for reading it! It's the first one to get comments on my blog lol.
Yes, we did use lockfiles, all over the place. However, each service had a different (and mandatory) security update that modified the libraries with a non-breaking change.
From now on I will try to establish a fixed update policy, although given the frequency of updates in the ecosystem it will become outdated rather quickly.
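Something like the following is the kind of check that would at least surface the drift between services (package name and expected version are placeholders; it assumes npm's usual flat node_modules layout):

```javascript
// Compare the version actually installed in node_modules against the version
// we last validated, and fail loudly if they differ between environments.
// Assumes npm's usual flat node_modules layout; values are placeholders.
const fs = require('node:fs');
const path = require('node:path');

const pkg = 'nanoid';     // placeholder package name
const expected = '3.3.7'; // placeholder: the version we validated against

const installed = JSON.parse(
  fs.readFileSync(path.join('node_modules', pkg, 'package.json'), 'utf8'),
).version;

if (installed !== expected) {
  throw new Error(`${pkg} drifted: expected ${expected}, got ${installed}`);
}
console.log(`${pkg}@${installed} matches the validated version`);
```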
To me this sounds more like a "microservices nightmare"?
What do you mean by "the service had a mandatory security update" (paraphrased)?
I'm guessing you are talking about updating packages because of warnings from "npm audit"? Or do you mean an update of some top-level package? Container image??
Anyway, an update is an update.
And software tied to cloud services can always be brittle. Why no complaint about Amazon breaking semver (if the missing function mentioned was part of the public API), or about the package that broke (if it was not)?
Because your company doesn't apply the same security update on the testing server, thus creating a difference between the two services.
npm is highly susceptible to this if you're using uncommon libraries, especially ones with under a million downloads. However, the same can apply to any other tooling or language you use.
As I already answered, if you are using packages that break semver (it can happen very easily), or packages that depend on unspecified behavior of another package at a specific version, none of that is caused by the package manager or the programming language.
I love me some JS hate from an informed IT expert, but sorry, as politely as I can put it, blaming common targets of unpopularity (JS) for your personal and/or organizational issues (not understanding dependency management) seems very unprofessional to me.
The word "mandatory" is a popular slippery slope, but from your description, I can at best guess what you meant and you haven't elaborated on it.
Dependencies are not magic; they are other people's code. If you blindly update without understanding how anything works, you are in for pain.
This could have been written about any language, even "serious" ones. Yes, JavaScript the language and ecosystem have many issues, some of which are fundamental flaws. But the kind of nightmare the author experienced can happen in any project where you're pulling in dependencies, software that other people wrote.
Nothing can be 100% reliable and stable when your house is built from parts which other people are changing all the time. What helps: testing, locking to specific versions, reducing your dependencies. But that's not specific to JavaScript, that's pretty much the entire house of cards which we call software development.
Edit: Somewhere else I saw the phrase, "cultural problem". That describes the situation with JavaScript and the reason for the author's hellish journey debugging an issue introduced by indirectly updating an obscure dependency. But culture can be improved, and it has been evolving to be more serious about reliability.
Will we ever awake from the nightmare of history? I mean, JavaScript? No, it's part of our legacy now: as the article described, JavaScript code is running all over the globe, including enterprise-scale, important and serious things. We have to work from within the system to reform it. And probably gradually replace it with WASM, for better or worse.
Well "this.subset.encodeStream is not a function" can't really happen in a language with a robust type system. Or at least you would have to try really, really hard, to use reflection, or to somehow load a different version of library at runtime than on compile time. JS raises brittleness to whole other level.
This could have happened with any language: Rust, Go, C#... JavaScript is getting a beating here, but it is not to blame for issues with out-of-sync packages.