
For small dependencies it's often not the implementation that matters as much as the tests. And even if you think you can write all the tests in an afternoon, they're often born out of actual usage, which you can't replicate so easily.

I'd say if a dependency is small enough that you can write it in an afternoon, you can even more easily read its source and tests and decide if it's high enough quality to use as-is.




I came here just to say this.

"This'll take an afternoon" - three weeks later......

Programmers are notorious for this.

BUT even apart from this problem ... you absolutely should use every dependency you can that will save you time.

Try to write less code, not more. When you write code you write bugs, add complexity, add scope, increase the need for testing, increase the cognitive load required to comprehend the software, introduce the need for documentation... there's a vast array of reasons to use existing code even if you truly could estimate it and build it in an afternoon.

You also assume that you understand all the edge cases and fickle aspects of the dependency, all the weird ins and outs that the dependency author probably spent considerable resources understanding, fixing and bug hunting.

There's a hard fact that proves the above poster wrong: how many dependencies took only an afternoon of time in total to write? Hard to say (maybe look at the GitHub commit history), but I'd guess almost none. It didn't take the dependency author an afternoon, so why will it take you an afternoon?

Even worse .... you just lost an afternoon coding features for your core application.

Multiply this by every dependency that "you could build in an afternoon" and you'll be in Duke Nukem Forever territory.

I'd advise doing the opposite of this article's suggestion.

Find a dependency that will save you an afternoon? Grab it.


All I know is that if enough people rely on the same dependency for long enough, the chance of encountering large bugs becomes smaller and smaller. Especially if the dependency has a stable interface.

Good software gets better the more it gets used and abused, so I tend to stay away from small dependencies; they're usually not worth the time.


Little dependencies are fine. In fact, they're usually preferable: it leads to less code going unused.

To me, the best case scenario is adding a dependency of medium size and complexity that you're confident you could write yourself. This means that if you run into problems, you can just shrug and ditch the library, but if it's ok then you save some time. What's terrifying is dependencies so large that you can't fathom the amount of effort required to make them. It also makes it much harder to tell if the library is actually any good. Luckily, for your example of time libraries, there's normally a "blessed" library for whatever ecosystem you're in. Tiny / super simple dependencies are a complete waste of time; if I can write it in under an hour I would much prefer to do so.

IMO there's a lot to be said for writing your own version that does 60% of what some library does, but 100% of what you need it to do.
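To make that concrete, here's a minimal sketch of the idea (the function and the use case are hypothetical, not taken from any comment here): a trailing-edge-only debounce that covers the one behaviour you need, while skipping the leading-edge, maxWait, cancel and flush options a full library such as lodash.debounce would offer.

    // Hypothetical "60% version": trailing-edge debounce only.
    // A full library also offers leading-edge calls, maxWait, cancel and
    // flush -- features deliberately omitted here because they aren't needed.
    function debounce<A extends unknown[]>(
      fn: (...args: A) => void,
      waitMs: number
    ): (...args: A) => void {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: A) => {
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), waitMs);
      };
    }

    // Usage: only run the search after the user stops typing for 300 ms.
    const onInput = debounce((query: string) => console.log("search:", query), 300);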


And when you run into a bug or design problem in a dependency of a dependency of a dependency?

It often takes less time to write some code than to understand someone else's code.

Most programmers I've worked with get lost easily when jumping through layers of other people's code. I certainly do.

Solid, well tested dependencies that solve hard problems are worthwhile. But dependencies have a cost in debuggability and maintenance, so it's worth using them with care. And often, they aren't worth the time, when compared to writing a dozen lines of code.


Never use a dependency if you could write something of equivalent quality in an afternoon. Seems reasonable enough.

Hard same. After having a few experiences where a dependency hardly does anything, my approach is to read the docs first and see if I can implement the parts I need in a couple of hours. Often I can.

20 afternoon dependencies -> now you have a month of work, more code to test (and write tests for), more code to support in the future, more code for new developers to understand. Add edge cases you're not aware of, and you're screwed.

My basic rule: if a dependency has a small code base (<300-500 lines), I will copy/paste that chunk of code into the repo and reference the original repo (assuming the LICENSE is appropriate).
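As a sketch of what that looks like in practice (the repo, commit and function below are made up for illustration), the vendored chunk keeps a header pointing back at its origin and license:

    // vendor/deep-freeze.ts
    //
    // Vendored from https://github.com/example/deep-freeze (MIT, commit abc123).
    // Copied instead of installed because the whole package is ~40 lines;
    // keep this header so the original author and license stay attributed.
    export function deepFreeze<T extends object>(obj: T): Readonly<T> {
      for (const value of Object.values(obj)) {
        if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
          deepFreeze(value as object);
        }
      }
      return Object.freeze(obj);
    }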


The risk is much bigger than simple lifecycle testing will account for. Upstream micro libraries could easily disappear or have bugs, leaving you spending more time trying to work around them than if you had just written the darn thing yourself in the first place. Or, you suddenly have a new requirement that doesn't play well with the lib.

Programmers love dependencies because it lets them pass the buck, but each and every 3rd party module is a potential time bomb waiting to go off.


It can go wrong both ways. But if changing real dependencies introduces cascading failures that have to be changed in a lot of tests, I'd suspect that there is expressive or structural duplication in the tests that could be eliminated.

In my experience, I have seen some Jurassic-scale disasters, because of poor dependency choices.

I think a lot of people just google for dependencies, and then add the first one that has a slick Web site, without thinking much about the code they are adding.

I am not a "never dependency" person, but I am anal about quality. Totally obsessed. I feel that quality is something that many, many programmers eschew, in favor of bling and buzzwords.

For me, I won't put my seal on something until I have tested it six ways to Sunday. In some cases, it may be unit tests, but, more often, it is a test harness, which can be a much more ambitious project than a few XCTests[0]. In fact, I am running into a lot of folks that don't know what a test harness is, which is jaw-dropping.

Since I do a lot of device control stuff, unit tests are not particularly useful. In order to create unit tests that would be effective, I'd need to design a huge mock, and that would not be worth it; likely introducing more problems than it solves.

An example is that I am currently developing a cross [Apple] platform Bluetooth LE Central abstraction driver[1]. This needs to have test harnesses in all the target systems (iOS/iPadOS, MacOS, WatchOS and TVOS). I actually have it driving a released app[2] (which is really just a "productized" implementation of the iOS test harness), but I do not consider the module ready for release, as I have not completed all of the test harnesses. I am in the middle of the WatchOS harness now. I may "productize" the MacOS test harness. My test harnesses are really serious bits of code. Complete, ready-to-ship apps, for the most part. Going from a test harness to a shipping app isn't a big deal.

[0] https://medium.com/chrismarshallny/testing-harness-vs-unit-4...

[1] https://github.com/RiftValleySoftware/RVS_BlueThoth

[2] https://apps.apple.com/us/app/blue-van-clef/id1511428132


I'm not really saying you should write anything from scratch that you don't have to, just that you should treat a dependency as something that you did. Therefore, review it, check compatibility, and have some named team responsible for its maintenance and availability.

It depends on the situation. As an example, it could be difficult to write tests when you have a circular dependency. Also, depending on the language, there could be strange ’side effects’ which would be difficult to debug.
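A hypothetical sketch of such a cycle (module and function names are made up): each module imports the other, so unit-testing either one in isolation means loading or mocking the whole cycle, and with CommonJS-style resolution one side can even observe a partially initialized export from the other.

    // order.ts
    import { discountFor } from "./customer";   // order -> customer

    export function orderTotal(customerId: string, subtotal: number): number {
      return subtotal * (1 - discountFor(customerId));
    }

    // customer.ts
    import { orderTotal } from "./order";       // customer -> order: a cycle

    export function discountFor(customerId: string): number {
      return customerId.startsWith("vip") ? 0.1 : 0;
    }

    export function lifetimeValue(customerId: string, subtotals: number[]): number {
      // Testing this function alone still drags in order.ts (and back into
      // customer.ts), or forces you to mock away one side of the cycle.
      return subtotals.reduce((sum, s) => sum + orderTotal(customerId, s), 0);
    }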

I'll grant that if you think it'll just take an afternoon, then for the sake of this article's argument it had better actually take just an afternoon!

But conceding that charitable assumption to the article, I agree with its basic premise: dependencies cost a lot of time in diffuse, non-codey ways.

There are AAA dependencies you pull into every project, but most other dependencies require a good degree of due diligence and evaluation, carry risk, and bring their own long-term maintenance.

It's not that it always tips the scales all the way to 'roll your own', but I think the cost of new dependencies is underrated.


The article highlights a huge pitfall of having lots of small dependencies, provides no real remediation, and says small dependencies are easier to reason about, so they're better anyway.

I disagree. I personally find that with large projects, small dependencies actually become much harder to reason about - they sort of get lost in the codebase. Not to mention the duplication of small packages on npm doing similar things - with a large team, it's easy to add both to your codebase and then we're in a new sort of duplicative mess.


If you can actually do that, it reflects positively on your dependency's ability to make simple cases short and easy. Many libraries and frameworks actually don't.

That's what this is about.


As usual, everyone here will start their "I'm better than this" comments.

If you've ever used a code dependency, you are a target for malicious code. That's just how it is.

In this case, using small packages like this helps in...

1. Reliability - these packages are typically 100% unit tested

2. Convenience

3. Reduce codebase size. I can't imagine having to copy paste every little small function into a mega utils file.


Oh that's certainly a good point and a deficiency in my design.

But then again if you're only testing H against A's vendored dependencies specifically, you're missing testing it against B's. I'm not sure how you would fix that.


> I'd argue that it's not exactly "better", but more that it just makes different tradeoffs.

Yes, it makes tradeoffs that are different, and those tradeoffs are different in that they're better. Often, not always. That's the statement in a nutshell.

> Using a full well-written* dependency for something not part of your "core competency" (even something seemingly trivial) is often a better choice in my opinion.

If it's outside your core competency, you can't really judge if a dependency is well-written. Popularity is an entirely unreliable quality metric. "Well-tested" also means nothing, a lot of people write completely useless unit tests but still fail on integration. How much time are you going to spend researching and vetting the code and the development "team" for that "seemingly trivial" dependency? How about writing a few lines of code and building a little bit of understanding instead? You can still use other people's code as a reference, even copy parts of it.
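For instance (a generic illustration, not an example the commenter gives), the infamous left-pad micro-dependency boils down to a few lines, or even a built-in on modern runtimes:

    // A few lines instead of a micro-dependency: pad a string on the left
    // to a target length (assumes `pad` is a single character).
    function leftPad(str: string, length: number, pad: string = " "): string {
      while (str.length < length) {
        str = pad + str;
      }
      return str;
    }

    console.log(leftPad("42", 5, "0"));  // "00042"
    console.log("42".padStart(5, "0"));  // same result via the built-in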

> That dependency will often know the "unknown unknowns" to you since their "whole purpose" is solving that one issue.

Sure, a few lines of code may have unknown unknowns. You know what else has? Other developers. Unbeknownst to you, "the team" could be the left-pad guy, who turns out to be "political". "The team" could be the guy that just hands over an unmaintained repository to a crypto-thief from China. That one took several weeks to get noticed by the community.

You start with one little dependency for one use-case, you end up with hundreds if not thousands of dependencies. You realistically have no capacity to deal with this in a diligent manner.

One could argue that by using "modern Javascript" with its pathological reliance on micro-dependencies, one has already given up control to the hive of random developers anyway, so one more dependency wouldn't hurt. I probably would agree with that. That doesn't mean every Javascript program would be better off this way.

