How A Pull Request Rocked My World (clayallsopp.com)
320 points by 10char | 2013-02-02 19:55:13+00:00 | 158 comments




IIRC, Fabio works at the Swiss Ruby shop Simplificator, who are all very nice chaps indeed! Glad he helped you along.

EDIT: I just checked, yes he does (well, unless we're talking about another Fabio of course, which is possible...)


The classic "Replace Conditional with Polymorphism" refactoring, which took me way too long to realize also applies to Python.

Indeed it does. I have a large code base at work that uses this pattern for Django models with a bit of metaprogramming to create a template for types of input sensors. It's amazing what this kind of refactor can do when you need to turn a long conditional into something more generally extensible. New sensor types with custom column configurations take an average of 5 lines of code now, instead of about 30 before.

Yup. I remember that particular realization. Now switch statements in OO languages just look broken to me. Thanks Python!

Yes, it's cataloged in Martin Fowler's excellent book Refactoring: Improving the Design of Existing Code, in Chapter 9.

I'd recommend the book strongly.
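For anyone who hasn't run into it, the refactoring in question can be sketched in a few lines of Ruby. The row names here are made up for illustration; this is not the actual pull-request code.

```ruby
# Before: behavior selected by a conditional on a type tag
# (hypothetical row names, not the actual pull-request code).
def render_row(type)
  if type == :switch
    "<switch row>"
  elsif type == :submit
    "<submit row>"
  else
    raise ArgumentError, "unknown row type: #{type}"
  end
end

# After "Replace Conditional with Polymorphism": each subclass carries
# its own behavior, so adding a row type means adding a class rather
# than editing an ever-growing branch.
class Row
  def render
    "<generic row>"
  end
end

class SwitchRow < Row
  def render
    "<switch row>"
  end
end

class SubmitRow < Row
  def render
    "<submit row>"
  end
end
```

The caller just sends `render` to whatever row object it holds; the branching lives in the class hierarchy instead of a conditional.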



I like this kind of refactoring; it makes you feel like: http://thejoysofcode.com/post/36047317385/when-i-replace-200...

I don't love this refactor. I especially don't like how it generates names from other names -- e.g., given 'switch'/'submit', appends '_row', and then converts it to CamelCase so you get 'SwitchRow'/'SubmitRow', then instantiates the class named that.

Here's why: When I am maintaining somebody else's code, even if I guess that this is going on, I rarely trust these kinds of name-generation tricks. When I need to change some code, I'm going to grep the codebase for 'SwitchRow' and add a new line in whatever conditional to instantiate my new class. When grep can't find any instances of SwitchRow's constructor being called, I become suspicious and it costs me more time, because I am making sure I properly understand what is actually going on.
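Concretely, the trick being criticized looks roughly like this hypothetical sketch (not the actual pull-request code):

```ruby
class SwitchRow; end
class SubmitRow; end

# Name-generation dispatch: 'switch' -> 'switch_row' -> 'SwitchRow'
# -> the class constant. Note that SwitchRow never appears literally
# at the call site, which is exactly why grepping for its constructor
# turns up nothing.
def row_class_for(type)
  class_name = "#{type}_row".split('_').map(&:capitalize).join
  Object.const_get(class_name)
end
```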


> ...I rarely trust these kinds of name-generation tricks.

So you're probably not a big fan of Rails, then?


And what does `trust` mean? You don't trust that the correct object will be given back to you?

I suppose it's more like trusting whether the resulting code will always be correct.

What about loading multiline YAML text with automatic generation of objects from class-name strings? (reminder!)

I'm actually not a huge fan of implicit name-generation tricks, but I'm also not entirely opposed to them.

The trick here I think is consistency - is name-generated type dispatch the standard across the entirety of the codebase? If it is, go wild. If it's not then I'd be much more wary of it.

Consistency and building correct expectations for future maintainers is pretty important. Doing a smart trick in one place but failing to do it in other places where it makes sense IMO tends to create a bigger mess than doing it the dumb-but-safe way everywhere.


I so agree with you. There are few programming mistakes that consistency of use doesn't ameliorate. On the other hand, no matter how "good" the code, if it's inconsistent, it's bad.

I disagree. Consistency is only valuable so long as it helps clarity and understanding. While that is most often the case, there are exceptions.

Agreed. Names should not matter. You are going to have to document the name construction. If your library's users use IDEs, you are going to have to get them to understand the generated names.

Agreed. This "refactor" is actually a hack. It shouldn't pass code review.

This is especially true for those of us who do application support.

When we are investigating application stack traces from production for a large application, it is awful when you discover that the developers decided to be clever with dynamic function names, making a large code base very difficult to support based on available logs.

If you think you're being clever you're probably just being a dick.


Given this is ruby code, I think it's acceptable, and well-structured enough to be understandable.

While the implementation itself might be a bit 'eugh' because of the meta-programming involved, there was enough thought put into code organisation to allow you to grep `RowType` to find the module that namespaces all these dynamically instantiated classes.

Hell, if the code in the tree was arranged correctly, then `tree | grep row_type` would give you the folder where they all lived.


tl;dr: polymorphism is usually better than if/else.


I don't agree with this change. Based on the code, it looks like there should be an enumeration of cell_types somewhere, a button should have a cell_type, and then there's a map from cell_type to construction_fn (or a polymorphic set of classes to serve as the map.)

Metaprogramming is something you do when you can't do it easily in the usual way. If a dispatch table or inheritance takes care of something, use the existing language features instead of building your own.

(EDIT: On examination, there already is a RowType, and the pull request is a lot closer to my suggestion than I first thought, so mostly ignore this.)
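The explicit mapping described above might look like this in Ruby (hypothetical names again):

```ruby
class SwitchRow; end
class SubmitRow; end

# An explicit dispatch table: every class name appears literally, so a
# grep for SwitchRow finds this registry, and unknown types fail loudly.
ROW_TYPES = {
  switch: SwitchRow,
  submit: SubmitRow
}.freeze

def row_class_for(cell_type)
  ROW_TYPES.fetch(cell_type) do
    raise ArgumentError, "unknown cell_type: #{cell_type}"
  end
end
```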


Refactors like this are super cool and often needed, but I sometimes wonder why we don't store the easier-to-read code as well.

I know we have git and we can always do a diff to see what changed. What I am talking about is a more holistic view: being able to search a code base for SwitchRow and somehow have the code come up where it was refactored into this dynamic, metaprogrammed style.

Git can do this, but only if you are actively looking for it. I wish you could attach a meta-comment to the refactor without cluttering up the code.


Because if you understand the new code you don't need the "easier to read" code.

Constraining yourself to only writing code a beginner can understand is stupid. Maintaining good code and "easier to read for a beginner" code in tandem is a waste of man hours.


The article is very humble, but in my view, seems to go a little too far.

"You can reuse code with inheritance. Absolutely crazy. Brilliant.

All of this blew me away."

Honestly?


My thought exactly. Seems like a pretty basic usage of polymorphism/inheritance.

That's a critical point that you just have to understand, to "get". Many people know and learn about object-oriented design, but when tasked with actually writing a program based on these principles, they will often revert to code that makes distinctions based on the type of what they have been handed.

TLDR: dynamic_cast is bad design


I guess he's gonna come back with another article when he realizes composition > inheritance.

You'll need to wait like 5 years, if he was anything like me.

Using static conditions rather than polymorphism seems to just be a common thing that people tend to do, regardless of skill level or quantity of experience. It's not uncommon for people just not to think of using polymorphism, if the objects in question wouldn't really model any particular kind of object as such (they're just there to provide a table of function pointers). This says nothing about the quality of their code, or the general OOness of their designs, or the number of digits in their IQ.

It's a bit of a shame, because I've found code to be reliably improved by using objects as simple tables of function pointers. The same is less often true for using objects as models of things.
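An "object as a table of function pointers" in this sense is just a swappable bundle of operations with no domain identity of its own. A Ruby sketch with an invented example:

```ruby
# Neither class models a real-world "thing"; each is just a swappable
# bundle of operations the caller dispatches through.
class CsvFormat
  def separator; ','; end
  def quote(field); %("#{field}"); end
end

class TsvFormat
  def separator; "\t"; end
  def quote(field); field; end
end

# The caller never branches on format; it just sends the same messages
# to whichever bundle it was handed.
def emit(fields, format)
  fields.map { |f| format.quote(f) }.join(format.separator)
end
```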


That is strange to hear. That kind of polymorphism is literally the first example given in any OO tutorial. "Bark", "Meow", "Oink", etc.

The traditional animals/shapes/cars/etc. examples all use objects to represent actual things. But sometimes one's objects don't reflect anything you're actually trying to model. The need for them has just sort of arisen from the way the code is structured. They don't represent anything as such; they're just a way of putting in some extensible runtime dispatch. And it seems like people often prefer to use conditionals rather than objects in this situation. Not really sure why. Perhaps most people think of mere objectness as having some meaning, which makes using an object feel inappropriate for this kind of situation? I'm not sure.

If you keep an eye out for this, I'm pretty sure you'll start to spot it!


To each his own.

For me it was epoll() that blew me away. Absolutely crazy. Brilliant. But in retrospect so damn obvious.


epoll() (or its predecessor poll()) is a kind of low-level API function, whereas polymorphism is a general and fundamental programming concept.

It's irrelevant. The point is that each was an example of a dramatically different way of looking at things. "You don't know what you don't know, but now you do."

At that point I closed the article and thought "Wtf HN, why was this on the front page?". Then I remembered that anything Ruby-related gets automatic upvotes, even if the author does not know the first thing about OOP.

I also didn't get the point.

It is as if the OP did not grasp what OOP is all about.


I didn't get that from the article. What I heard was the guy was amazed that someone took the time to attempt to better his code and (in his opinion) made it much better.

I agree with that to a large extent - that's something that always amazes me about open source. You're slogging along doing your thing, and then all of a sudden someone drops a huge patch on you that makes so much sense.

With that said, I did go back and re-read it, and your reading could be correct; it's hard to say. It does have a bit of humblebrag to it, for sure.


Evocative writing for a technical post: "serpentine" if/else, "rickety door". I know nothing about iOS and enjoyed reading it.

I'm not a developer, but I read this article anyway.

All of this blew me away. Like this "@mordaroso" fellow strolled on up, cocked his knee skyward, and jettisoned his leg into the rickety wooden door that previously sheltered my mind.

That was just great writing.


I have to disagree. "Jettison" does not fit the simile he was going for, at all.

It's not great, but it could be good. Replace "jettison" and cut the cliched "All of this blew me away" and the resulting paragraph is descriptive and fairly tight.

Jettisoning one's leg into a door sounds painful. Unless one has a prosthetic, and then I must wonder why one is throwing it at a door. Do they not need it to stand?

I disagree. I had to parse the sentence three times to make all the pieces fit. :\

This google tech talk is basically about the same thing, eliminating conditionals with polymorphism: https://www.youtube.com/watch?feature=player_embedded&v=...

At first glance it seems great, but it's much more complex to understand and debug.

The if tree was certainly not pretty, but straightforward - it did its thing. It was something that could be given to an average developer to work on and improve.

The refactored code will require someone that's not just an average developer - if only because you need to read the article to understand it if you're not familiar with javascript, while the initial code needed only a glance to understand what it did.

IMHO, that's just complexity for complexity's sake - unless you have plans to take advantage of the new flexibility it provides.

(and if you have to find, hire and pay that 'better than average' person, it might not be such a good thing.)


This is not written in javascript, it is ruby. And I think your developer is way below average if he can't understand the pattern used in the refactored code.

If you have a ruby programmer who can't understand this, fire them. And get a new one. This type of stuff is all over ruby code.

It's scary to think that this guy who was just shocked to discover a very basic concept of object oriented programming (polymorphism) has written his own book on iOS programming:

http://clayallsopp.com/posts/writing-a-programming-book


How many books did you write?

How is that relevant?

Just pointing out that his little ad hominem attack has no place on HN.

It would be an ad hominem if the book was about gardening but it's not, it's about programming.

Ah, the good ol' ad hominem.

That's not an ad hominem.

I assume you are responding to me. Yes, it is. From Wikipedia: "Argumentum ad hominem is an argument made personally against an opponent instead of against their argument." AlexeyBrin is implying that because greenyoda may not have written any programming books, his argument is less valid. AlexeyBrin's reply does not refute greenyoda's central point: that it's somewhat strange that the blog post author has written a programming book about Ruby despite not being aware of what some argue are the language's commonly used features.

Sorry; I was actually responding to AlexeyBrin on https://news.ycombinator.com/item?id=5157519

Sorry for the confusion ;)


I did write about twelve books and I still consider OP's point to be very valid.

The blog entry is great in that it shows how collaborating over the Internet can lead to improvement but it's indeed a bit concerning that someone who didn't know what polymorphism was had been publishing a programming book.


Clay wrote a book about RubyMotion (a Ruby compiler for iOS), not a book that teaches OOP. His knowledge about polymorphism is not so relevant for this book (I think he "knows" what polymorphism is but he was genuinely amazed by the way you can use it for this particular case).

> very valid

Validity is a binary state. Don't modify it with an adverb.


"He is the creator of some of the most widely used RubyMotion libraries"

Then everyone is surprised when critical bugs come to light in Ruby stuff.


I think this comment is a little unwarranted. Polymorphism wasn't the concept he discovered - the idea of generating the classes (metaprogramming) and the "clever" trick (well, some would call it that, at least) was what interested him.

I also don't like how your comment tries to make fun of his efforts to learn something new.


> I also don't like how your comment tries to make fun of his efforts to learn something new.

The problem is not that he's learning, he's teaching.


Why is that a problem? As I pointed out, he just showed us a metaprogramming trick he thought was cool. That is no indication that he is an inept programmer or someone who shouldn't be allowed to teach something like RubyMotion (if he were writing a book on metaprogramming, that would be another thing).

No problem at all, I was talking about the book. Even if he hasn't lied, when one reads the "author" section, it sounds like he's an expert. In the description of the book it also says it can help you even if you are a "veteran", which might be true (or not), but makes one think he is a veteran too. It's misleading.

What, teachers should never learn?

The basics? Yes they should learn that, years before starting to teach.

Metaprogramming isn't a basic part of any programmer's education.

In terms of Gang of Four patterns, I believe this would be Strategy + Factory Method. The book is a little antiquated (1994) but it or a better successor should be required reading for any programmer.

I don't believe that it's terribly antiquated. Pretty much everything bases itself off of this book.

Edit: drafted on iPad - fixed spelling. Trilby - does anyone even use that word anymore?!?


"I don't believe that it's the trilby antiquated. Pretty much everything bases itself off of this book."

Hardly.

The GoF book is sweet if in this day and age you're still doing Java/C# but if you're programming in a language like, say, Clojure you can throw about 75% of the GoF book into the trash.

pg said to use a Lisp to beat the average, and that's what I'm doing. I'm certainly not accepting the status quo and considering that, say, the "visitor pattern" is a godsend. Most of these "patterns" only make sense in a non-functional, OO language, and are only needed because of missing language concepts (like the lack of higher-order functions and/or real macros).
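The point about higher-order functions in one sketch: what GoF calls Strategy collapses, in a language with first-class functions, into just passing a function. Even Ruby shows the collapse (invented example, not from any codebase under discussion):

```ruby
# Strategy as a class: a whole hierarchy exists to carry one method.
class HalfOffDiscount
  def apply(total)
    total * 0.5
  end
end

def checkout_with_object(total, strategy)
  strategy.apply(total)
end

# Strategy as a function: with first-class functions the pattern
# collapses into passing a lambda; no class needed.
HALF_OFF = ->(total) { total * 0.5 }

def checkout_with_lambda(total, discount)
  discount.call(total)
end
```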


I'll be honest - I'm not terribly familiar with functional programming languages.

Design patterns are, admittedly, workarounds for language deficiencies, but the book is still relevant because far more people program in Java and C# than in Clojure.

I don't think Clojure is really object-oriented, and therefore wouldn't recommend the use of object-oriented design patterns at all. But as the dominant paradigm of our age, I think it's worth understanding OOP even if the final decision is to reject it.

It has a couple of patterns that have been more or less rejected by modern OO leaders:

1. "Singleton" (enough said)
2. "Template method" is more or less "inheritance over delegation"

There's also a couple, like Flyweight, of such narrow applicability that they hardly merit placement on the short list with classics like Composite and Factory, and a few newer items like Dependency Injection are surely worth a mention, whether you want to argue for or against them.

Finally it's got an approach to presenting and explaining the patterns that clearly predates the web and modern attention spans. All that said, it's still a classic, but depending on the preferred language and technical proficiency of the reader it wouldn't always be my recommended introduction to design patterns.


What is your recommended introduction to design patterns? I've been meaning to read the original but I do get that old-school clunky Java feel from it. I'd be very interested in an updated Design Patterns for OO languages with mild functional capabilities like C++11, C#, Python.

It predates Java, and a number of the examples use Smalltalk.

I liked Head First Design Patterns. It's fun and covers most of GoF patterns (with a brief overview of leftovers). The examples (although fairly trivial) are in Java.

Why has the singleton pattern been rejected? Similarly, why has the template method been rejected? Both seem entirely reasonable patterns to me - what is so wrong with them?

Never mind. For the singleton pattern, I can see why it should be used judiciously, though to my mind none of the arguments I've read are completely compelling.

However, the template method pattern I do understand now. Far better to define an interface, then inject an instance object that implements the interface and work on that.
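What the parent describes, replacing a Template Method override with an injected collaborator, looks roughly like this (hypothetical example):

```ruby
# Instead of a Template Method (where a subclass would override
# format_line), the varying step is an injected collaborator that
# implements the expected interface.
class Report
  def initialize(formatter)
    @formatter = formatter
  end

  def render(lines)
    lines.map { |line| @formatter.format_line(line) }.join("\n")
  end
end

# Any object responding to format_line can be injected; no inheritance.
class ShoutingFormatter
  def format_line(line)
    line.upcase
  end
end
```

Swapping the behavior means passing a different formatter, not subclassing Report.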


Reading the comments in this thread is very interesting to me, because each programmer seems certain of the superiority of his own programming style.

We have some comments that disparage the change, saying that it obfuscates the code and makes it more difficult to understand.

We have others saying that this change is a basic technique and it's "shocking" that the author wrote a book without this simple knowledge.

What I take from this is yet another situation of people arguing about something highly subjective that is closely tied to their identity. Good programming is not an objective concept. People respond differently to different programming techniques and what works better for one person may greatly confuse another.


+1 for "Good programming is not an objective concept."

Which is to say: programming sucks nowadays.

There is not even a consensus on what is good in programming! Take a look at other professional cultures: people work extremely hard to develop the skills that everybody else in their profession has. What does a software developer do? Whatever they like, on their own; there is no professional culture whatsoever: "Don't give a crap about humankind's experience, I'll invent everything on my own, the way I like."

It's actually unbelievable how different programming is in this respect from other disciplines. I don't even mentor junior developers because of that, as I can't refer to anything authoritative and say "Do it like this", because there is always a bunch of folks who do it the opposite way. There is almost nothing to rely on.


The nature of programming is that any consensus solution can be automated. That's a good thing.

I'm always amazed, though, at the extent to which that isn't followed through on. Consider the code for logging in and storing user credentials. The level of collaboration on this is at the level of a list of best practices. Why are we still letting anyone (re)write this code?

Okay, how do you use /bin/login over HTTP?

Situations change. So do languages. I hope you can find an Algol compiler, but not Algol 68; that's too new for this code.


If you can't draw upon your past mistakes, or techniques that have failed for you before, you shouldn't be mentoring.

Put more positively, use your experience to guide others.


It's not that there's no professional culture in programming; it's that programming has a problem/solution space orders of magnitude larger than any other profession that has ever existed. If we could agree on some constraints on what software does and what the priorities are, then we could develop more concrete professional standards, but that's not how it works. Software is the stuff of pure thought; it is layer upon layer of abstraction that towers up to hitherto unknown data processing tasks. As messy as it is, we can only strive to further the state of the art as it applies to the task at hand. There's no way to definitively pit NASA's process against Facebook's against a random Perl data cruncher. None of this means programming doesn't have a professional culture, just that it will take a while longer to distill the strongest knowledge, and it will necessarily be broader than other professional standards.

Another thing we can expect is further specialization with different standards.

> There is almost nothing to rely on.

You could rely on your experience.


I would have thought www.swebok.org (Software Engineering Body of Knowledge) represents some consensus on what is good in programming?

There's stuff to rely on, it's just that mostly nobody wants to spend the time reading it.

I don't agree with that.

There is a confusion about programming, though: an idea that it is a purely technical discipline, like laying bricks, when in reality it's a design discipline, like designing a building.

If you wanted to build a house you wouldn't just hire a couple of bricklayers and tell them "do it". Yet that's what happens in programming...


Early in my career, a project manager said to me, "I've never heard one programmer praise another programmer's code. They always criticize!" I think that reinforced for me both that someone's approach might be good even if it isn't my own, and also that it's important to say when something is done well. Of course, there's also just a lot of bad code out there.... :-)

A lot of programmers are insecure. They were the nerds in school, not popular, not good at sports, etc. Programming is the one thing they can do where they feel in control, and confident. So they naturally tend to be defensive about it.

> A lot of programmers are insecure.

You mean a lot of people? If you don't, I wonder where you got this from. Personal experience, maybe?

Anecdotic: I have always been good at sports, popular, and confident, and I criticized this article because he used a kind of link-bait title (should have been "Today I learned about inheritance and it's awesome", something like that), and he also turned out to be selling programming books like he was an expert, which is scammy, and reinforces the fact that you can only find good books through recommendations from your peers.


So he didn't know some aspect of polymorphism. Is that pertinent to his book? It's not as if using if/else constructs is bad or wrong. In some fields of development it would be preferred, for blatant clarity.

Does it follow that his book is bad or wrong because it was not written with understanding of this aspect of polymorphism?


"You can reuse code with inheritance. Absolutely crazy. Brilliant.

All of this blew me away."


I don't rightly understand why you would choose to assume the worst possible interpretation of that line and take it as grounds to heap scorn upon the writer (it strikes me as a tad uncharitable, and needlessly hostile), but

http://news.ycombinator.com/item?id=5160287


> Anecdotic: I have always been good at sports, popular, and confident,

I'm fairly certain this is a lie, given the way you've presented yourself in this thread.


I'm fairly certain you don't think it's a lie, and you are just envious.

Either way, I don't see how any of this matters. I explained why I criticized the guy, and instead of working on that, you tried to undermine the anecdotal part.


anecdotal

-1 for the ad hominem: "... and you are just envious"

Are you kidding me? I was copying his style, to show him how irrelevant and childish the statement was.

Also, it wasn't an ad hominem, since there wasn't any argument being debated. Get your fallacies straight.

I guess this is why you have 9999 submissions with 1 point. To be able to go around downvoting and incorrectly calling people out on a fallacy (the new trick you have learned).


> Also, it wasn't an ad hominem ...

I don't get it. What's the point of replying to a perceived false accusation of ad hominem with three more ad hominems?

"Get your fallacies straight", "I guess this is why you have ...", "(the new trick you have learned)"


Let me give you some honest advice on how not to be a fallacies noob. Try explaining what's wrong with the argument, without falling into the arrogant attitude of just naming the fallacy and downvoting. That way you will stop trying to make every argument fit into a fallacy, and start truly understanding what (if anything) is wrong with it.

I must have corrected the 'ad hominem' thing at least 5 times today, so I'm tired. It's not ad hominem if it's not trying to undermine an argument (it's just insulting or trolling), and it's not ad hominem either if it's relevant (e.g., a programmer doesn't know the basics, that fact comes to light, and it is used against him in a programming discussion).


No undermining, I just think you're lying about being popular and confident and good at sports.

It's ok if that upsets you.


It's really weird for you to jump to that conclusion.

-1 for the ad hominem: "... given the way you've presented yourself in this thread"

I spent a few years as the sole maintainer of a moderately complex enterprise app written pretty much entirely in PL/SQL. Hundreds of thousands of lines of it. By the time I left, a reasonable proportion of that was code I'd added.

A few months after I'd left I got a phone call from the guy who'd taken over the project - he rang specifically to compliment me on the code and how easy it was for him to understand what was going on and how much better that had made his work. Didn't really know how to respond as I've never had that before, but it was nice.

Not blowing my own trumpet, so to speak (although it is something I'm kinda proud of), but to provide a counter data point to your PM's.


Although I've never gone to the effort of actually calling somebody up to thank them, I have gone out of my way to thank colleagues whose code I've been maintaining if I have found it easy to work with.

Code that is:

- consistent in style and approach (even if it's different to mine)
- easy to reason about
- clearly commented, detailing why something was done the way it was

is hard to do consistently, and very often stands out from the spaghetti code that's present in other parts of the system.

The reason I do this is firstly to let the person know I appreciate the effort they have gone to, and secondly to signal to the other developers on the project that this is something worth trying to attain, and at least one person values these things.


Thanks for sharing your story! Now that I'm a freelancer, I see lots of different code bases, and I've seen a few that are really nice. It feels great to see and appreciate someone else's good work. I make a point of letting the people in charge know that the previous/other contributors have done a great job.

One of the great things about joining a well-written project is that usually I learn something new. If that guy was so struck by your code that he called you up, I bet he learned a few things, too.

As in the OP's story, sometimes the code I learn most from isn't immediately obvious, and I have to "go with it" for a bit before I see what's happening. But that takes some openness, perhaps something like what Zen folks call "beginner's mind."

There are times when, like you, I have to start tearing things out and rewriting chunks of functionality, but in general I think programmers are too quick to take that step. My PM anecdote taught me the importance of reading other folks' code charitably, really trying to understand why they made the choices they did, and doing my best to respect and follow their approach. Textual interpretation has the "principle of charity," and reading code probably should too.


I maintain an old ircII script. The guy that originally wrote it (srfrog - the LiCe script) is just an amazing programmer. The whole thing is clean and beautiful.

I feel ashamed that when I touch it, I dirty it up.


Both arguments are right, but not because of some subjective goodness; they're talking about two different parts of the patch:

- the "bad" part is RowType.for(), where a type is dynamically called forth based on manipulations on some string. I agree that a more concrete mapping from string-to-type would be preferable (I'm assuming the use of a string is required for some reason).

- the "good" part of the patch is that he replaced a conditional with type dispatch. I agree that this is a basic technique of object-oriented programming.


A technique can be both basic/obvious and not a course of action that everyone would agree upon.

The surprise isn't that he didn't make the choice to do it that way. The surprise is that the choice didn't occur to him. The feedback would be very different if he simply had made a different choice.


I think that programming is very close to how we actually think. Programmers tend to be objective people who like distinguishing good from bad and better from worse. They're generally analyzers to the core.

But we all have some personal aspect of how we see the world. How we organize logic, view priorities, and such, is all personal and unique, no matter how much we may feel that it's objective.

Programming is an expression of that inner thought life. It's how you organize, categorize, and optimize, all laid out in bare raw form. It's an expression not just of our identity, but to an extent of how we exist. Being who we are, we usually tend to think highly of ourselves and our way of doing things, so it's only natural that we think highly of our code style.


Something that I've come to realize only in the last couple of years: most programming arguments revolve around personal preference. There are of course optimizations that can be made to actually improve machine performance; these can be found reasonably simply through profiling. At the end of the day, most arguments revolve around "style".

It's actually a very tricky subject. If you're working on SME apps, including startups, most often a given project's style will flow from the lead (or the person/people who initiated the project). They may be good, they may be bad. Invariably, less experienced developers are hired and learn to code following this style. Eventually, those less experienced developers become experienced, and leave to pursue other projects. And they take that style with them.

Depending on the role they have, there is likely still a lead above them. And this is where the arguments begin. The lead has his/her style. The new dev has experience with another style, and it may or may not mesh with the new lead's style.

And so the problem perpetuates.

It can be very difficult to work with devs in the early-middle stage of their career. If they are good people, they learn to be flexible, adopt the best parts of all the styles they are exposed to, and eventually become great leads. But many will not; they stay confrontational, and are ultimately poisonous to the teams they are on.

If I can quote my favorite philosopher, Ted "Theodore" Logan: "if the only true wisdom lies in knowing, then you know nothing"


A programming argument that can be convincingly shown to be only a matter of personal preference is irrelevant, but I strongly disagree that those are common. Various programming language constructs have different properties, and if you have had the opportunity to maintain a single project for some 5-10 years, to experiment with different implementations of the same functionality, etc., you learn that certain techniques prevent problems and certain others provoke them.

Look at the writing of Joshua Bloch, based on his experience designing the Java standard libraries, and you will find plenty of arguments that you could think are about "style", but in fact he always includes a very detailed and concrete discussion of why one way is superior to the other. Similarly with some of the articles by John Carmack of id Software. For me the revelation was that there is indeed no such thing as "style" in the sense that some techniques are just "prettier" than others; it is all a matter of the consequences a given implementation has for other developers using the code and for its maintainers.


I knew the comments were going to be exactly as you described before I clicked the comments link (and had this in mind http://xkcd.com/1053/) and I'm really glad your comment is on top and visible for everyone to digest before they start handing out their own judgements.

These two statements are not mutually exclusive - the technique is basic, and sometimes its drawbacks may outweigh its benefits. While the applicability of the technique in a particular situation may be a matter of discussion, ignorance of its existence is shocking considering that the author has reportedly written a book on programming.

When people disagree that doesn't imply that the situation is highly subjective and all opinions are equally valid. In this case I believe they most certainly are not and I think you are drawing stronger conclusions than is warranted.

There are objective ways in which to judge programming, by considering whether simple rules like the SOLID principles, low coupling, DRY, using composition over inheritance, keeping code complexity measurements low, favoring immutability, statelessness and referential transparency, etc. have been followed.

If any programmer is certain of the superiority of 'his programming style' (singular), then that programmer is insufficiently humble and lacks knowledge of multiple programming styles. Tragically, if you don't know what you don't know, you can think you know everything and can thus speak with confidence on a subject.

One can wonder what comments would be left if you removed all those from programmers with less than 5 years of programming experience, programmers under 30 and programmers with experience in only a single language. My suspicion is that a pretty consistent picture would emerge, with a number of caveats and YMMV's, with an eye for the actual, intended and possible future use cases, the language, the maturity of the project, etc.


I can agree with some of your points; however, I believe you make too many simplifying assumptions.

> There are objective ways in which to judge programming, by considering whether simple rules like the SOLID principles, low coupling, DRY, using composition over inheritance, keeping code complexity measurements low, favoring immutability, statelessness and referential transparency, etc. have been followed.

Hardly anyone on here would disagree that these are good programming practices (or so I think...), but the question becomes: what is the best way to favor immutability? FP solves that problem but many people don't like the all-or-nothing approach. Same with referential transparency.

> One can wonder what comments would be left if you removed all those from programmers with less than 5 years of programming experience, programmers under 30 and programmers with experience in only a single language.

Programmers under 30? Think how many languages exist that aren't even 30 years old! This is one area where I highly disagree. I have found that age matters very little with regard to someone's ability to write code. I have seen code written by "veteran programmers" that is worse than some teenager's weekend project. I'm not denying there is a correlation, but I would say it is a very weak correlation that probably isn't worth mentioning.

I do agree with programming experience, although I'll add the caveat that programming seems to be interesting compared to other subjects in that the rate at which different individuals become better at it seems to vary very dramatically.


  but the question becomes: what is the best way to favor immutability [..]
To which my answer would be: it doesn't matter, as long as you consider it and try to apply/enforce it wherever it makes sense.

  I have found that age matters very little with regard to
  someone's ability to write code. [..]
I was thinking of their greater (life) experience; in my experience they tend to be more nuanced and open to different ways of doing things, which makes for nicer, constructive, conversation.

You're talking about the Dunning-Kruger effect, it's everywhere!

To be honest, I'm very disappointed with the comments here.

Clay is obviously a very talented programmer, and just like most talented programmers, he didn't start out that way. This article isn't about how he learned OOP; it's about how someone opened his eyes to how code can be elegant instead of merely functional.

This article hits home with me especially. I just learned Python last semester in my intro to CS class. I had an eerily similar moment when I learned how to use generators. "WOW! You can simplify this huge block of code into one line! It's... beautiful." It's a really special moment, and I'm glad Clay shared his.


Those who know this technique either discover it themselves, through "reading, learning, doing", or are shown, like this guy was. None of us was born knowing this. Clay has blogged about his experience of discovering this - that in itself is what makes this a great read for me. I wouldn't be surprised if every single tech book author, at some point after their book has been published, has learned something and thought "I wish I knew that when I wrote my book!" By their very nature, books document the author's understanding at a point in time. Would polymorphism have helped Clay's book? Probably. Does not knowing it ruin the book? Probably not. If we waited 'til we knew everything before we wrote a book about anything, nothing would ever get written.

Writing a book is also a great way to discover new things about your subject.

While true, when I purchase a book I'm trusting that the author is an expert in the topic. Maybe my expectations are too high.

You may never know - if it teaches you some things and makes you more productive, then what of it?

My concern is the inverse of that: recommending something you either don't fully understand or haven't used long enough to speak to either its pros or cons. Being in a book usually is an indicator that something is a best practice or how the way things should be done, ostensibly being put forth by an expert. Perhaps we shouldn't hold such content to such a standard.

I checked out the self-published Object-Oriented Design Anthology that this Clay guy ostensibly wrote, judging by the discourse in this thread.

Instead, I found that he wrote what is, from what I can tell, the only book that introduces RubyMotion, published by PragProg and endorsed by other authors.

His blog post shows humility, an ability to praise the work of others, and an effort to add value to the world.

All things that these HN comments distinctly lack and, ultimately, punish.


While I understand the point you're trying to make, OOP is a pervasive concept that should factor into building Ruby and RubyMotion apps. My point is if he opts not to use polymorphism in a RubyMotion app, it's an indication from him as the author of a RubyMotion book that polymorphism should be avoided for whatever reason (maybe the dynamic dispatch is too expensive). My assumption isn't that the author simply doesn't understand OOP, but rather that this was a deliberate decision. But, now a hole in his knowledge becomes best practice for many, because as you pointed out, it's the only book on RubyMotion.

I appreciate the humility. I just wish it manifested in a crawl before you can run mentality. There is actual value in understanding the fundamentals.


Authoring a book and being an expert in the topic of said book is, surprisingly, not always correlated. I don't know this particular person, so don't read that as a comment on this particular author. I have met people whose books I have read, I have seen code written by these same people, I have talked with them about the book topic. My take away is that some people are good at writing books, some people are good at writing code, some, but definitely not all, are good at both.

Critiquing these particular lines of code or the pull request is unimportant. My takeaway was that it's hugely valuable to simply make your code available and have other people look at it. Not only can it improve your own code, but you can personally learn something new from other people.

Now I'm feeling a bit guilty for not having open sourced any of the code I've ever written, and I'm wondering what kinds of lessons I could have learned earlier on.


I would like to point out that he was able to describe his original implementation in a few sentences. This is important.

Novice programmer: Uses simple if/else.

Adept programmer: Discovers polymorphism. It's great, uses it for everything.

Master programmer: Realizes that in many cases, it makes the code less clear and more verbose, reverts back to if/else until the flexibility is actually needed.

Same story repeats itself many times on different fronts: metaprogramming, recursion, exceptions, macros, etc.


One more step: Master programmer realizes that they're still a Novice programmer - repeats the cycle.

I tried to picture the solution before scrolling down past the initial block with the nested ifs. It got me thinking about "switch()", as seen in C, C++, and elsewhere. That in turn got me thinking about using an enum to pick up on whatever it is he's trying to do here, and jump to the right place accordingly.

Generically, I had something like this in my head:

    switch (row_type) {
      case submittable: ...; break;
      case checkable: ...; break;
      ...
    }
Then I started realizing that using a switch means you have isolated things such that they can only be one of the n types. It won't ever be submittable AND checkable, for instance. I assume that's what is desired here: mutual exclusivity for all of these possibilities.
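In Ruby the same idea reads as a case expression. A minimal sketch (the row types here are illustrative stand-ins, not the article's actual flags): exactly one branch fires, so contradictory combinations can't occur.

```ruby
# Dispatch on a single row type; a case expression picks exactly one
# branch, so a row cannot be both :submit and :check at once.
def make_cell(row_type)
  case row_type
  when :submit then "submit_cell"
  when :check  then "check_cell"
  when :switch then "switch_cell"
  else              "default_cell"
  end
end

make_cell(:submit) # => "submit_cell"
```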

Looking back at the original code, I see that by using the nested if/else construct, it effectively gives priority to them in the order in which they are encountered. A row with a submit_button is going to fire off make_submit_cell() whether or not it has checkable set. It trumps the possible call to make_check_cell, in other words.

These two approaches solve two different problems. Without knowing more, it's hard to say whether the "priority" approach makes sense here.

If I had to guess, based on the refactoring which turned it into a series of distinct classes, that sounds like it should have been "pick 1 of n" (one-hot code) all along. If that's so, the if block wasn't just ugly, but it was actually covering up some real problems. It was just asking for trouble in terms of bugs down the road since it allowed nonsensical settings.

But that's just a guess. I could be wrong.


Indeed, the original code was, IMO, incorrect.

It looks like the purpose is a pretty common one in iOS - you have a table view populated by many rows, each of which can be a different class that renders and behaves differently. It's a common need in iOS apps.

What he really needed was an enum, not flags. The choice of class is mutually exclusive and flags are horrible for tracking this, and are great for hiding bugs - one erroneous flag flip and suddenly your code is walking down a completely different path that's non-obvious on first debug. Using a bunch of bools to approximate state is error-prone and IMO just bad architecture.
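The contrast the parent is drawing can be sketched in a few lines of Ruby (the attribute names are mine, not Formotion's): with boolean flags, nonsensical states are representable and the if/else order silently decides the winner; with a single type attribute they aren't representable at all.

```ruby
# Flag-based state: nothing stops both flags from being set at once,
# and the branch order silently gives :submit priority.
FlagRow = Struct.new(:submit_button, :checkable) do
  def cell_kind
    if submit_button then :submit_cell
    elsif checkable  then :check_cell
    else                  :default_cell
    end
  end
end

# Enum-style state: the row holds exactly one type, so contradictory
# combinations cannot be expressed in the first place.
TypedRow = Struct.new(:type) do
  def cell_kind
    { submit: :submit_cell, check: :check_cell }.fetch(type, :default_cell)
  end
end

FlagRow.new(true, true).cell_kind # => :submit_cell (checkable silently ignored)
TypedRow.new(:check).cell_kind    # => :check_cell
```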


I do something a little bit similar with DOM interaction in JS; for example, a ".btn_pdf_docs" button shows all ".pdf_docs" elements. I consider the code to implement this more readable than a bunch of event bindings.

I like Martin Fowler's Refactoring. This is called "Replace Conditional with Polymorphism", explained here: http://www.refactoring.com/catalog/replaceConditionalWithPol...

The book is a good read and a fantastic reference.
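A rough Ruby sketch of that catalog entry (the class and method names are mine, not from the book or the pull request): the conditional moves into subclasses, and the caller just sends one message.

```ruby
# Before the refactor, a caller would branch on what kind of row it
# has. After it, each subclass answers for itself.
class Row
  def cell
    "default_cell"
  end
end

class SubmitRow < Row
  def cell
    "submit_cell"
  end
end

class CheckRow < Row
  def cell
    "check_cell"
  end
end

# The caller no longer contains any type checks.
[SubmitRow.new, CheckRow.new, Row.new].map(&:cell)
# => ["submit_cell", "check_cell", "default_cell"]
```

Adding a new row type now means adding a subclass, not editing an existing conditional.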


This person found something that they didn't know they didn't know and were so happy they blogged about it. Regardless of what I may personally think about the merits of the code before or after, it makes me happy that there is another human being who is thrilled by learning something new about programming.

I like it when developers praise each other.

Fabio's approach does the job, but it fails at genericity - its usefulness is limited to the scope of the Formotion project. Which is unfortunate because something actually simple is going on - a mapping of keywords to functions.

What you really want is multimethods - they provide all the goodness of classic polymorphism, without forcing you to model intricate, rigid class hierarchies/relationships.

Some pseudocode illustrating the concept:

    # declaration. the 'dispatch function' we pass can build an arbitrary value out of the args it receives.
    defmulti build_cell lambda { |args| args[1].tag }
    
    # if the value the dispatch function generates equals :submit, this body is gonna be called
    defmethod build_cell :submit do
        # implementation...
    end
    
    # more defmethods for :switch, :check, :edit etc
    
    build_cell(row, cell)
    # our lambda gets invoked, which re-passes the arguments it receives to the corresponding implementation
Multimethods don't seem to have been adopted at all in the Ruby community. A quick googling reveals a couple of implementations, though.
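The pseudocode above can be made concrete in plain Ruby in a few lines. This is my own minimal sketch, not one of the gem implementations the parent mentions:

```ruby
# A tiny multimethod: dispatch on a value computed from the arguments,
# not on the class of the receiver.
class Multi
  def initialize(&dispatch)
    @dispatch = dispatch  # builds an arbitrary key out of the args
    @methods  = {}
  end

  # Register an implementation for one dispatch value.
  def define(key, &body)
    @methods[key] = body
  end

  # Compute the dispatch value, then invoke the matching body.
  def call(*args)
    key = @dispatch.call(*args)
    @methods.fetch(key).call(*args)
  end
end

build_cell = Multi.new { |row| row[:tag] }
build_cell.define(:submit) { |row| "submit cell for #{row[:title]}" }
build_cell.define(:switch) { |row| "switch cell for #{row[:title]}" }

build_cell.call(tag: :submit, title: "Save")
# => "submit cell for Save"
```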

In the Lisp world they are first-class, even though they aren't used all the time. http://clojure.org/multimethods might be worth a read.


Ruby is so unique. Back in the day I too was surprised by Ruby's reflection and metaprogramming features. So much easier than in the languages I had used before.

Nowadays I see people wanting to change Ruby to be more like those other languages. Needless to say, Ruby is best when all of Ruby is available. Putting restrictions on it wouldn't begin to turn it into what some people want.

On the server-side, people can use all kinds of languages, with no restrictions on choices. That's why these languages can flourish. On the client-side we are more restricted. The industry gets to dictate what goes on the client-side more.

Sigh.

I'm trying to use Dart lately which compiles to Javascript. While it can seem a bit like Ruby in OO and dynamic typing, it's a world of difference in other regards. Languages like Ruby that can evolve while breaking backward compatibility have more chance to be useful out of the box.


Can you imagine how much more pleasant participating in HN, GitHub, etc. would be if all programmers were this pleasant and open-minded?

Regardless of the particular merits of the technique he talks about here, I think we can all learn a lesson from this guy about appreciating the work and ideas of others.


I couldn't agree more. The ability to appreciate others' work is a sign of intelligence.

I would not go for polymorphism (especially with all that metaprogramming) for this simple conditional.

I liked the article because I feel like I'm at the stage of my career that I need to be a mentor. But at the same time I don't feel like I know enough about everything.

Here is someone writing books and programming iOS apps who can still have his mind blown by what seem to me some fundamental concepts. I think there are a lot of people out there doing real work who can benefit from a little push. That makes me feel that a) I have things I can teach and b) it's OK to not know everything - even what others may think is "basic" stuff.


I was confused as to how a lot of folks took the post to mean I didn't know what polymorphism was...and then I experienced the whole Arrested Development-esque "I've made a huge mistake," probably a little too late.

A lot of folks drew attention to "You can reuse code with inheritance. Absolutely crazy. Brilliant." and took it as equivalent to me saying "It's absolutely crazy and brilliant that you can reuse code with inheritance." I get that both in- and especially out-of-context that's how it comes off, but it's not what I meant. It was sloppy and I should've been more careful in writing.

I meant it to come off more like "...everything is dynamic and decided at run-time, plus you get a great plugin architecture if you do some polymorphic tricks with these RowType objects. And all of these benefits came from just one simple refactoring. The power of small refactors is absolutely crazy. Brilliant." (I've added this as an addendum to the post)

Like the point of the whole thing wasn't about the refactor itself and if/how the code "rocked my world", but the fact that a refactor could have a seismic effect on the future of a project and the direction it takes.

The addition of many more row "types" wasn't even in the orbit of my thinking without the refactor, and it really helped shape where the project went: there were just those 4 branches in the `if`/`else` tree in this post, but there are now 20-odd default row configurations.
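A stripped-down sketch of the kind of lookup involved (the class and method names here are illustrative; Formotion's actual implementation may well differ): a row type symbol is turned into a class name and instantiated, so the 21st row type is a new class, not a new branch.

```ruby
# Hypothetical row classes; in the real project each would render a
# different kind of table cell.
class SubmitRow; def cell; "submit_cell"; end; end
class SwitchRow; def cell; "switch_cell"; end; end

# Map a type symbol like :submit to a class name like "SubmitRow"
# and instantiate it -- no conditional to edit when types are added.
def row_for(type)
  Object.const_get("#{type.to_s.capitalize}Row").new
end

row_for(:submit).cell # => "submit_cell"
```

The trade-off the top comment raises is real: generated names like these won't show up when you grep for `SubmitRow.new`.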

Hope that helps add some more context for folks.


Sounds like the open/closed principle of SOLID development: "Objects should be open to extension but closed to modification". This basically means that users of your code shouldn't have to modify your classes to add functionality. It should be done through inheritance and adding new types. http://en.wikipedia.org/wiki/Open/closed_principle
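One hedged Ruby sketch of the principle (all names invented for illustration): subclasses register themselves in a shared table, so a user extends behavior by adding a class, never by editing the existing dispatch code.

```ruby
# Handlers register themselves when defined; the dispatcher below is
# never modified when new handlers are added.
class Handler
  REGISTRY = {}

  def self.handles(key)
    REGISTRY[key] = self
  end

  def self.for(key)
    REGISTRY.fetch(key).new
  end
end

class CsvHandler < Handler
  handles :csv
  def run; "parsing csv"; end
end

# A user adds functionality purely by defining a new subclass --
# open to extension, closed to modification.
class JsonHandler < Handler
  handles :json
  def run; "parsing json"; end
end

Handler.for(:json).run # => "parsing json"
```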
