Practice what you preach. Passive-aggressively sniping at asynchronous code by implying it can never be small or light is not a valid argument. There is no "meat" to such a subjective position.
You're not discussing the topic; you're committing the logical fallacy of suggesting that the only alternative to your position is some absurd extreme, i.e. that readable code is pathologically inefficient.
If you want to have a discussion, don't start by putting a ridiculous set of trousers on the straw-man you made.
We are far too willing to dismiss things as mere "personal preference" in this industry. There may be competing theories all supported by some empirical evidence and sound logic, but some claims are directly contradicted by hard data. Those claims are not just a personal preference for the best way to do things. They are measurably, objectively wrong, and failing to say so is just being nice because we don't want to criticise someone.
On the evidence I have found to date, I am coming around to the view that the style of writing many very short functions (say up to 5-6 lines) with little complexity in the logic (say just a single level of nesting) is one approach whose claimed superiority is directly contradicted by empirical data. For example, McConnell discussed the number-of-lines issue in Code Complete years ago, citing multiple studies. Anyone can find still more by investing a few minutes in Google Scholar searches.
Alas, that does not stop bloggers, consultants, trainers and book authors from advocating this programming style, even though it invariably results in the kind of incohesive "spread" that the article mentioned.
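As a hypothetical sketch of the "spread" being described (the names and logic are mine, not from the article or the studies), compare the same calculation split across five trivially "small" functions versus one cohesive function:

```python
# Hypothetical illustration: identical logic as a chain of micro-functions
# versus a single cohesive function. Neither is from the article.

# Fragmented style: every function is tiny, but the reader must chase
# the call chain to reconstruct what actually happens.
def apply_discount(price, rate):
    return price * (1 - rate)

def add_tax(price, tax):
    return price * (1 + tax)

def line_total(qty, price):
    return qty * price

def order_subtotal(items):
    return sum(line_total(q, p) for q, p in items)

def order_total_fragmented(items, rate, tax):
    return add_tax(apply_discount(order_subtotal(items), rate), tax)

# Cohesive style: one slightly longer function that reads top to bottom.
def order_total(items, discount_rate, tax_rate):
    subtotal = sum(qty * price for qty, price in items)
    discounted = subtotal * (1 - discount_rate)
    return discounted * (1 + tax_rate)

items = [(2, 10.0), (1, 5.0)]
assert order_total_fragmented(items, 0.1, 0.2) == order_total(items, 0.1, 0.2)
```

Both compute the same result; the difference is only in how far the reader's eye has to travel to verify that.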
Isn't what you're saying kind of exactly the point of the article in the first place? The conclusion even explicitly points out that the aim was not to claim that small functions are harmful(!), but rather that the converse - that small functions are good - isn't inherently true either.
And while the article doesn't give fully concrete examples, it does propose the outlines of a few; enough to get across a meaningful message (to me). Also, arguments that lean on concrete examples are more at risk of attacking strawmen, precisely because a concrete example can be flawed in irrelevant ways that obscure the underlying principles. Not that I'm opposed to examples - just that most examples are almost necessarily simplifications, and choosing a source-code simplification is not categorically different or better than choosing a pseudocode or diagram simplification.
During my earlier years, I would get into all types of dogmatic debates, such as "DRY-considered-harmful" or "small-functions-considered-harmful". With experience, I've realized that such abstract debates are generally pointless. Any principle can lead to bad results when taken to an extreme, or badly implemented. This leads to people declaring that-principle-considered-harmful, swinging the pendulum to the opposite extreme, and beginning the cycle all over again.
Now, I find such discussions valuable, but only in the context of concrete examples. Devoid of concrete and realistic examples, the discussion often devolves into attacking strawmen and airy philosophizing. If this article had presented realistic examples of small functions that should have been duplicated and inlined, I think we can then have a much better discussion around it.
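In the spirit of that request, here is a hypothetical example of the kind of concrete case I mean (the names and data are made up): a one-line predicate called from exactly one place, where inlining arguably reads better than extracting.

```python
# Hypothetical example: a one-line function with a single call site.

# Extracted version: the reader must jump elsewhere to learn what
# "eligible" actually means.
def is_eligible(user):
    return user["age"] >= 18 and user["active"]

def eligible_users_extracted(users):
    return [u for u in users if is_eligible(u)]

# Inlined version: the condition is visible at the point of use.
def eligible_users(users):
    return [u for u in users if u["age"] >= 18 and u["active"]]

sample = [{"age": 20, "active": True}, {"age": 17, "active": True}]
assert eligible_users(sample) == eligible_users_extracted(sample)
```

Whether extraction pays off here depends on whether the predicate is reused or its name carries real domain meaning - which is exactly the kind of trade-off an abstract debate can't settle.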
That said, I do have to offer a word of warning. It's possible that the author is a good programmer who knows how to incorporate long functions in a way that is still clear and readable. Unfortunately, I've had the misfortune of working with horrendous programmers who write functions that are hundreds of lines long, duplicated all over the place, and are a pain to understand and maintain. Having short-functions and DRYness is indeed prone to abuse, but it still works as a general guideline. Great programmers may be able to ignore these guidelines, but at least it prevents the mediocre ones from shooting themselves (and others) in the foot.
I’m not attempting to compare apples and oranges. I’m ranting about how almost every time someone posts something about tiny programs (like 100-line JS programs or whatever), someone has to chime in about how they’re only able to make something so small because they’re using all these abstractions (code) below them and, as such, it’s not an achievement.
There is a lot of disagreement here with the ideas presented, but I think most of the arguments are taking the ideas too literally and to extremes... as with any advice, there are no hard and fast rules.
I have some guiding principles similar to the author's which make me end up doing essentially the same thing (I also like to keep functions under 80 columns wide and roughly 24 lines tall, but there are many exceptions)... yet my principles are seemingly contradictory, e.g.: 1. minimalism, 2. holism.
Notice that they are not imperatives; they are merely principles / desirable attributes. By minimalism I usually mean what the author is talking about, but in a more general way: keeping code blocks small is one such desirable... but this does not mean splitting everything up into micro-functions that cause a horrible nest of layered calls, because that goes against the principle of holism. This, I believe, is also similar to what the author wants - holism is part of legibility, whereas reducing your function scope and interface only improves the immediate legibility of the local code.
Ultimately what I'm talking about is balance, which is why I think it's better to present these ideas to people as principles rather than as imperative rules that can be followed rigidly (i.e. wrongly).
I see a lot of programming debates start with assertions that look like this:
> Language A is dynamically typed, so it's more expressive, so you'll have working code sooner, so you'll be more productive!
or this:
> Language B is statically typed, so you can't fool the compiler, so your code is more likely to work on the first try, so you'll spend less time debugging and therefore be more productive!
Neither of these statements should carry any weight without actual hard data, no matter how compelling the argument sounds. And the fact that we can't come to an agreement about how to quantify things like productivity and "expressiveness" is itself evidence that these are subjective statements.
The whole point of science is that rational thinking alone is not enough. There must be empirical data. A rationally argued falsehood is as useless as an irrationally argued falsehood or an irrationally argued truth. A beautiful logical argument based on false premises is still wrong. Only empirical data can sort out which it is you have in hand.
Sorry if that offended you, it wasn't my intention; I tried to keep my comment fair to both sides.
Let me however just say that what you're doing in your reply is attacking the straw man. You're setting up a version of my argument and then attacking that, instead of responding to my argument directly.
I'll respond to your comment anyways. Code is art, just as you alluded to in your response. And from a solely artistic point of view, there's a certain glee and wonder at seeing short and smart code. When I browse codegolf on SE, I never fail to be amazed at the frankly fucking brilliant solutions some people come up with.
Having said that, my main point was that code like that, in my opinion, does not belong in a proper project. I get the point of it being practical for a single person, but that codebase is above and beyond what is reasonable. It's simply not nice code.
You're saying I shouldn't consider the code not nice just because I don't understand it, but you seem to be missing the main point amid your flowery metaphors and analogies of art: art is subjective. You might find that code to be beautiful in its own way, and I'm sure that's justified to you, but I do not.
Well, I'm not saying you should discredit advice from people who haven't made great works (i.e., most of us), just that it's a question you need to ask to put their commentary in perspective.
Especially when someone like Uncle Bob says: "The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. This is not an assertion that I can justify. I can’t provide any references to research that shows that very small functions are better. What I can tell you is that for nearly four decades I have written functions of all different sizes."
Well, alright - in that context he's asking us to just take his word for it, and there are no tangible arguments here. Whereas Casey Muratori has a much more thoughtful exploration of this topic ( https://caseymuratori.com/blog_0015 ), and he's also written some excellent code (in terms of solving difficult problems and doing useful things).
> Ugly code that works pays bills. Beautiful code that doesn't work is, in a very literal sense, worthless
What about good code that works? All the places where I've seen your argument prevail ended up with not just 'ugly code' but unmaintainable code, and systems that cannot evolve at all.
Personally I don't call coding an art, but I've seen countless times how people choose a bad solution even though a better one costs exactly the same and, in the long run, is actually cheaper. And they _always_ use this argument: 'code is just a tool; if it solves the problem, it is good'. And then they either leave or have to spend weekends writing even more dirty code just to solve problems they wouldn't have had in the first place if they had spent a little more time thinking about the code.
I think what he argues against is a third culture, where the focus is on the implementation of an algorithm in the code, rather than on the algorithm itself -- and generally, the algorithm is what matters more. Just a couple of months ago I saw some code that spent most of its time traversing an entire (large) data structure and counting elements satisfying certain conditions, while 5 minutes of looking at the actual math was all that was needed to see that the count was a simple function of known quantities.
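A toy stand-in for that pattern (not the actual code from that incident; just the shape of the mistake): counting by traversing every element, when the count follows directly from known quantities.

```python
# Toy illustration of the pattern: counting multiples of k up to n.

def count_multiples_by_traversal(n, k):
    # O(n): walk every element and test the condition.
    return sum(1 for i in range(1, n + 1) if i % k == 0)

def count_multiples_closed_form(n, k):
    # O(1): five minutes with the math gives the answer directly.
    return n // k

assert count_multiples_by_traversal(10_000, 7) == count_multiples_closed_form(10_000, 7)
```

The code-focused reviewer might polish the loop; the algorithm-focused one deletes it.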
But, his valid point is indeed obscured by a lot of crap and strawmen :(
Who says this? No-one who is a serious practitioner of FP (as opposed to someone who just reads about it in blogs) will tell you that FP automatically guarantees good code. You'll hear that FP goes a long way at reducing some common sources of errors, which is a pretty big deal, but automatically correct code? No-one will claim that.
> The article was not an argument against FP.
I didn't say the article was. Besides the clickbait title, and some dubious assertions (and a poor example with QuickSort), I do agree with the overall assertion that "FP is good, but not enough".