> A little bit of short-term friction averts long-term intractability.
It's not a bit of short-term friction, it's constant friction. As long as development is continuing, there's always something about the API that can be improved and would really pay off if only that was actually its final state and not just the next step until we find a good reason to break it again.
> Great so let's just leave all of the legacy terribleness in it. Let's not address all the quirks and horribleness that sounded like a good idea at the time. Let's not let the language evolve.
Nobody is saying that. The golden rule of API design is do not break your customers.
The way you let a language or API evolve is by introducing a new API. For example, instead of changing the semantics of count() you might introduce len() and then try not to fuck it up this time.
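A quick sketch of what that looks like (the names are purely illustrative stand-ins, not the real built-ins):

    # Hypothetical: count() shipped meaning "number of top-level items".
    # Callers now want a recursive count. Changing count() would silently
    # break existing users, so its semantics stay frozen and a new API
    # carries the new meaning.

    def count(items):
        """Original API: number of top-level items. Never changes."""
        return len(items)

    def total_len(items):
        """New API: counts items recursively through nested lists."""
        return sum(total_len(i) if isinstance(i, list) else 1 for i in items)

    assert count([[1, 2], 3]) == 2
    assert total_len([[1, 2], 3]) == 3

Old callers keep getting exactly what they got before; new callers opt in to the new behaviour by name.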
> But the I in API stands for interface. A good interface can survive rewrites of what's underneath.
Absolutely!
But a good interface is also a really difficult thing to engineer, in part because it seems like a relatively easy thing. The gotchas don't become apparent until long after you ship.
Sometimes I wonder if some of this is just cost-cutting. A difficult task is an expensive task. Other times, I wonder if it's because API design is an actual specialty, and there are too many devs doing it who don't have the chops for it.
> With something like a Java interface, it doesn't really matter if those APIs change because all the code is built and released together at HEAD.
Right, this is the thing that winds up following from the monorepo: the monorepo problem goes away if we just release everything, the whole fuckin' system, into production every time we make any update to master. If that's your system, hats off to you; I'm not nearly that good of an engineer.
> There is tension between the benefit of designing user-friendly APIs and the temptation to overspend on effort to design great APIs. It is sometimes hard to anticipate how often an API will be used.
I think in antirez's case there is little 'underestimation' :)
Returning to the more general practice - I agree, it's easy to spend several days building a nice abstraction for something that is never extended again, or that gets extended in a way different from what was anticipated, so the abstraction doesn't help. In my experience, what has worked best for internal structure is to write a basic, working version (at the point where you don't yet know whether there will be any other users) and to refactor it into a nice abstraction once it actually needs to be extended. A huge positive of waiting until then is that you know what the abstraction is going to be used for.
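A sketch of that progression, with a hypothetical notification example (nothing here is from a real codebase):

    from abc import ABC, abstractmethod

    # Version 1: one concrete, working implementation. No abstraction yet,
    # because there's only one user of this code.
    def send_email(address: str, message: str) -> None:
        print(f"email to {address}: {message}")  # stand-in for a real mailer

    # Later, when a second channel (SMS) actually arrives, the shape of the
    # abstraction is obvious; extract it then, not up front.
    class Notifier(ABC):
        @abstractmethod
        def send(self, recipient: str, message: str) -> None: ...

    class EmailNotifier(Notifier):
        def send(self, recipient: str, message: str) -> None:
            send_email(recipient, message)

    class SmsNotifier(Notifier):
        def send(self, recipient: str, message: str) -> None:
            print(f"sms to {recipient}: {message}")  # stand-in for a gateway

The second requirement tells you what the interface actually needs to be, which is exactly the information you were missing on day one.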
> If you're building decoupled code you shouldn't need cross project changes.
That's the theory, but in practice designing robust, future proof APIs has proven to be really hard in a lot of cases. You're then left with the option of supporting old APIs forever or migrating dependent code to new APIs, both of which are difficult in their own ways.
> I also maintain that for a lot of the programming that makes the world go round, it's not that hard.
I guess it depends. I maintain that getting something mostly working is not that hard. But I haven't worked on a webapp that didn't have many obvious (to me) concurrency bugs.
So does that make it easy or hard? I dunno. Maybe writing correct webapps is hard, but maybe their correctness doesn't really matter.
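For a concrete flavour of the kind of bug I mean (a made-up check-then-act example, not from any particular app):

    import sqlite3

    # Two concurrent requests can both read the same balance, both pass the
    # check, and both withdraw: a classic lost-update race that "mostly
    # works" until traffic overlaps.
    def withdraw_buggy(conn: sqlite3.Connection, account_id: int, amount: int) -> None:
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
        if balance >= amount:  # check...
            conn.execute(      # ...then act, with a race window in between
                "UPDATE accounts SET balance = ? WHERE id = ?",
                (balance - amount, account_id),
            )

    # One common fix: fold the check and the update into a single atomic
    # statement so the database serialises concurrent withdrawals.
    def withdraw_atomic(conn: sqlite3.Connection, account_id: int, amount: int) -> None:
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = ? AND balance >= ?",
            (amount, account_id, amount),
        )

The buggy version passes every single-user test you throw at it, which is exactly why these bugs survive.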
> We definitely need to do that with our APIs and designs (aka. more incremental refactoring).
Not with APIs we've shipped. Then they become breaking changes that appear pointless to others. Even when it's logically worth it to go back and fix APIs, it's rarely worth it practically. If no one is using your APIs, then do whatever. If your APIs are popular, you might find yourself in "version next" limbo like so many major projects that redesigned and couldn't convince their users to follow.
But you can still evolve APIs; breaking changes can be made much less often than ordinary changes. And even then you can version them and run old and new in parallel if you want. I don't see why that would be slow.
I also don't see why making things in a way that's hard to change suddenly makes something amenable to agile.
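To sketch the parallel-versions idea (using a Flask-style HTTP API purely for illustration; the routes and payloads are made up):

    from flask import Flask, jsonify

    app = Flask(__name__)

    # v1 keeps exactly the shape it shipped with; existing clients are untouched.
    @app.route("/v1/items")
    def items_v1():
        return jsonify({"items": ["a", "b"]})

    # v2 carries the redesigned shape; new clients opt in when they're ready.
    @app.route("/v2/items")
    def items_v2():
        return jsonify({"items": [{"name": "a"}, {"name": "b"}], "total": 2})

Both versions can be served by the same codebase, and v1 only goes away once its traffic does.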
> Also most code starts off a bit hacky. That's the way it should be. It gets refactored once it's confirmed that the feature is needed.
I def think this is good; I just worry sometimes after hearing horror stories of dev teams spending literally a year refactoring code because of a bad initial implementation or future scalability problems, which leads to delays. But I guess that is just the natural progression of a product and needs to be faced then, not now. It also means that by the time we face that issue, we've passed the first obstacle: our product/feature is wanted by the user.
> But if you get it wrong, what happens? You have to refactor (if you didn't get too far in), or start again if you did. No big deal - code is perfectly amenable to this.
This works nicely on a small scale, but not with architectural problems.
> that is a business problem not an architectural problem
It's an architectural problem because this business constraint forces me to get the architecture right the first time - I won't get another chance.
There's no business reality where it's OK to say "just give me another 2 years to re-do the system in a (probably) better way".
> Then, a new, fresh alternative arises and moves your library into oblivion. This new library is awesome. It doesn’t suffer from a bloated API, it just works. It does more or less the same as your library but is faster and easier to use. Over time though, this new library will go through the same cycle as yours. It gets bloated too and will be replaced with something fresh at some time.
I have been recognizing this cycle more and more, I think it's almost a fundamental law of software engineering -- the war between flexibility and over-engineering/over-abstraction.
> The key thing with prototypes is that you have to mercilessly rip that code out before people start extending it and relying on it otherwise it's going to stick around.
A way to avoid this is to start with a clearly defined data model between components, and then _within the context of each of those_ hit the gas towards an MVP, flesh that out, refactor, etc etc etc.
Not always a possibility, I'll grant, but a ton of headache can be saved by being religious about API-ifying those services which can be made into APIs. Stable I/O, chaotic move-fast-break-stuff for internals.
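A minimal sketch of what I mean by a clearly defined data model (all names hypothetical):

    from dataclasses import dataclass

    # The stable contract that crosses the component boundary: small,
    # explicit, and treated as frozen once other components depend on it.
    @dataclass(frozen=True)
    class OrderPlaced:
        order_id: str
        customer_id: str
        total_cents: int

    def handle_order_placed(event: OrderPlaced) -> None:
        # Everything behind this boundary can churn freely (rewrites,
        # refactors, MVP hacks) as long as it keeps honouring OrderPlaced.
        print(f"fulfilling order {event.order_id}")

The boundary type is the part you design carefully; the internals are the part you let be messy until they prove they need to be nice.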
This is so true. It's tricky because some things are worth thinking ahead about a little - but on average, I've learned that it's far better to focus on making things easy to change than to make them directly accommodate future requirements (often at the cost of immense complexity)... If the code is so small and simple that you can easily rewrite it, then it is future proof and easy to understand and maintain - win-win.
It's easy enough to understand this abstractly, but it takes practice to know when to plan ahead and when not to. A good oversimplification: if you know it's a future requirement, it's worth thinking about, maybe even worth making space for in your architecture/API/data/whatever; if it's an unknown, just don't bother - keep the code simple instead so that you can adapt.
> You are looking for truth? Evolving APIs are evidence that people are unhappy with these interfaces,
That tracks, but this does not necessarily follow:
> demonstrating that the old APIs are not ideal.
I've been around the tech industry long enough to see several cycles of practitioners going back and forth between technologies and approaches. E.g. strong type systems (C++, Java) -> loose type systems (Python, JS) -> strong type systems (Rust, TypeScript). And each time the tide shifts there are always plenty of arguments trying to show why the previous approach was objectively worse and the new one is better.
That's not to say that nothing is moving forward, but the fact that people are unhappy with a technology doesn't mean that technology is objectively inferior to a proposed replacement. Sometimes it's just fashion.