That's what amenod was referring to. The current versioning model is different from the semantic versioning model that everyone is used to.
Personally I really hate how everyone (especially web browsers) blindly copies what Google comes up with, without putting much thought into it, whether it is UI changes, versioning or behavior changes.
True, but it's my understanding that Google doesn't pursue semantic versioning, at least with C++, partly because it's hard for a human being to tell which C++ changes affect the ABI and which ones don't.
> Semantic versioning has been around for quite some time. I used it when I did Java and C# development; is it really that weird? Is there something about it that's weird?
The concept only makes sense in the context of APIs. I don't know if that's obvious; I've seen people use semantic versioning with software that didn't have public interfaces.
Even if you go through a deprecation cycle, you're still going to eventually have a build N with feature X, and build N+1 without feature X. That's a breaking change.
Semantic versioning means less for a product like Firefox or Chrome than for a library like Underscore that thousands of other pieces of software rely on.
Suppose we replaced Semantic Versioning with any other approach (of your choice) that still disclosed the same amount of information about whether the API was identical, had an additive change, or had a breaking change. Would you object to that model too?
If so, it seems like you'd be effectively arguing that authors should simply not disclose important changes to the API. It is hard to imagine that position being defensible.
If not, why do you consider Semantic Versioning to be more hazardous?
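To make the "amount of information" concrete: semver discloses exactly one of three facts per release, and the version bump is just an encoding of that fact. A minimal sketch (the function name and change labels are hypothetical, for illustration only):

```javascript
// Given the kind of change since the last release, semver tells you which
// component to bump. That three-way signal -- identical, additive, breaking --
// is the entire disclosure under discussion.
function nextVersion(current, change) {
  const [major, minor, patch] = current.split(".").map(Number);
  switch (change) {
    case "breaking":  return `${major + 1}.0.0`;              // incompatible API change
    case "additive":  return `${major}.${minor + 1}.0`;       // backwards-compatible addition
    case "identical": return `${major}.${minor}.${patch + 1}`; // bug fixes only, same API
    default: throw new Error(`unknown change kind: ${change}`);
  }
}

console.log(nextVersion("2.4.1", "breaking"));  // "3.0.0"
console.log(nextVersion("2.4.1", "additive"));  // "2.5.0"
console.log(nextVersion("2.4.1", "identical")); // "2.4.2"
```

Any alternative scheme carrying the same three-way signal is informationally equivalent; rejecting both amounts to rejecting the disclosure itself.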
It used to work that way, though; software in the 90s and early 2000s did have sane semantic versioning, updates were backported to earlier major versions, and revision updates generally didn't break anything in production.
I bet semantic versioning is mostly championed by older veterans who have worked in maintenance or systems administration roles and are trying to get the industry to return to a versioning system that made sense once upon a time.
For my part ... while I'd dearly love to see semantic versioning come back, along with a few other good habits from a long time ago, I've given up on that. I now just accept that updates will break things sometimes, that there's always something to update, and so something will be broken most of the time; that's just how it is, so charge accordingly.
I think API versioning is an illusion.
It is all a source of complexity that eventually kills us.
Give up on keeping exactly the same behavior while the underlying models change.
IMO, semantic versioning (at least the x.y part) makes sense for libraries but not for applications but this distinction is rarely made and sometimes blurry.
They changed to semantic versioning. From what I remember they intended to keep their development style of breaking backwards compatibility where convenient, though. The consequence is that you’ll regularly see major version bumps.
I am pretty sure this is exactly the problem semantic versioning is trying to solve. You only pin to the major version, which is supposed to ensure no breaking change occurs.
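A minimal sketch of what "pin to the major version" means in practice (this is a hypothetical helper, not the real npm `semver` package, and it ignores 0.x versions and pre-release tags, which real caret ranges treat specially):

```javascript
// Accept any version with the same major component, at or above the pinned
// version: under semver, those releases carry only additive changes or bug
// fixes. A different major component signals a possible breaking change.
function satisfiesCaret(version, pinned) {
  const [vMaj, vMin, vPat] = version.split(".").map(Number);
  const [pMaj, pMin, pPat] = pinned.split(".").map(Number);
  if (vMaj !== pMaj) return false;        // major bump: may break the API
  if (vMin !== pMin) return vMin > pMin;  // later minor: additive, accepted
  return vPat >= pPat;                    // same minor: patch must not regress
}

console.log(satisfiesCaret("1.7.2", "1.4.0")); // true: additive/patch changes only
console.log(satisfiesCaret("2.0.0", "1.4.0")); // false: major bump, not accepted
```

This is the contract the comment describes: as long as authors bump the major version for every breaking change, a consumer pinned this way never picks one up automatically.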
That can be said about any versioning scheme. The idea that making things inconvenient suddenly makes people do the right thing is pretty naive, if you ask me.
The thing I find most frustrating about semantic versioning is that even under the strictest adoption it overly legitimizes breaking changes across major versions.
I see far too much refactoring of JS library APIs that is essentially aesthetic across major versions - D3 and Angular spring immediately to mind (I understand the scalability justification for Angular; I just don't really believe it).
To me, having an elegant API is not the highest concern because what I'm interested in is building products, and from that point of view stability is much more important. I do not appreciate a committee of J Random JavaScript Developers randomly dumping extra work into my product backlog because they didn't like the cut of an API's jib, especially when these people have gone absolutely out of their way to drive adoption in the first place.
To give a counter-example: .NET code I wrote in the mid-noughties whilst working at Red Gate still runs today, substantially unmodified. Some of that stuff is 11 or 12 years old. SQL Dependency Tracker is the most obvious example, because the product hasn't changed much since 2006, except for additions to support new object types in new versions of SQL Server. The fact that new .NET versions haven't broken the product has obviously made it much easier to support through the years.