
You could also measure non-programmers (they lack bias) and get real numbers on whether adding parentheses helps.



I don't see how these numbers would be more real than measuring programmers at work, unless you intend to help non-programmers write programs.

The article is too anecdotal. What is a "better programmer" compared to the average Joe developer? How do I measure this?

Then you are using a metric that doesn't tell you much about the quality of the programmers.

Agreed that an objective measurement would be ideal, and this is second best.

However, we've had a hard time creating or locating an objective measurement of general programming skill. As demonstrated by the industry's non-use of standardized tests for programming, I don't think anyone else has either.

If you have a lead on something, would love to see it.


Of course it can be quantified; otherwise hiring based on skill level for a software role would be impossible. You can look at examples of code and know someone's level, and then deduce their performance based on output and quality of code. The problem is that NON-technical people are the ones who want to quantify your performance, which has been my problem for years in the software industry. If you haven't written code in years, you will be lost as to whether I'm writing good software or not, yet these are the people who rise to the top to manage the technical people and then do a horrible job because they have no idea what to optimize for.

Yes? I am asserting that you are overestimating the value of the programmer, which is one axis of the correlation I suggest.

Why can't both be true? If you actually had some objective measure of programmer skill, wouldn't you expect there to be some kind of bell curve of skilled programmers, with 10x and 0.1x people somewhere at the tails?
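
As a toy illustration of how heavy those tails could be (the lognormal shape and the sigma below are pure assumptions on my part, not measured data):

    # Toy model: if "skill" were lognormally distributed, 10x and
    # 0.1x performers appear naturally in the tails. The choice of
    # distribution and sigma=0.8 are assumptions, not data.
    import numpy as np

    rng = np.random.default_rng(0)
    skill = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)
    median = np.median(skill)

    print(f"share >= 10x median:  {np.mean(skill >= 10 * median):.4f}")
    print(f"share <= 0.1x median: {np.mean(skill <= 0.1 * median):.4f}")

Under these made-up parameters, a few tenths of a percent of programmers land at 10x (and 0.1x) the median: the tails exist, but they're thin.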

I agree, although I think it's easier to correct for this bias when generalizing to all of SO than to all programmers.

Trying to create an accurate measurement of programming skill is far more difficult than looking for correlations between tests and some other subjective measure.

This misguided attempt at instrumenting "skill" could prevent highly capable individuals who, for societal or other reasons, don't have a solid math foundation from entering the field.


I don't think there is any quantifiable data to prove what either of us says... but I think what I say applies to most programmers, not all.

If combined with other data, it could show how efficient or inefficient programmers are.

> Is there any reason to measure by percentage?

It adds friction all over the place. It's more difficult for a company to find the competent programmers, more difficult to find good books among the dreck, more difficult to judge which product is excellent and which isn't (without significant effort, in some cases). Rating systems can (and will) be abused, and finding the needle is harder when the haystack is larger.

Looking at hiring, a company might have to spend considerably more effort sorting through the "bad" programmers. There will be false positives and false negatives in their search. How much damage will be done to the "good" products and the productivity of the "good" programmers because someone was misjudged when they were hired? How many skilled and passionate but less marketing-savvy programmers end up working for the quantity-over-quality companies?
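
To make that concrete, here's a back-of-the-envelope sketch in Python (every rate below is an invented assumption, not real hiring data):

    # Rough screening math; base_rate, sensitivity and specificity
    # are invented for illustration, not measured values.
    base_rate = 0.10    # assumed fraction of applicants who are "good"
    sensitivity = 0.80  # P(passes screen | good)
    specificity = 0.90  # P(fails screen | bad)

    p_pass = base_rate * sensitivity + (1 - base_rate) * (1 - specificity)
    precision = base_rate * sensitivity / p_pass

    print(f"P(passes screen) = {p_pass:.2f}")     # 0.17
    print(f"P(good | passed) = {precision:.2f}")  # ~0.47

Even a fairly accurate screen, applied to a pool where good programmers are rare, leaves roughly half the hires misjudged, which is exactly the friction described above.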

If we had cheap, fast, and reliable ways to actually gauge the quality/skill of media/people/whatever, then more options would always be good, even if they're of widely-varying quality. As it is, we don't, so the extra options are something of a mixed blessing.


> As a proxy for developer performance, intelligence is strongly correlated.

I think sampling some code from the applicant could be a better indicator.


I'd suggest there is no quantifiable way to measure what "a good programmer" is.

Nor is there a way to quantify "good code" in any scientific manner. It IS possible to identify obviously bad code, BUT "good code" comes in a variety of forms and looks different depending on the experience level of the developer.

In the end, many of the measures that people apply to assess programmers are simply a matter of personal opinion.

edit: downvoters - if you think I'm wrong then please explain your counter-opinion, don't just downvote. A downvote without a counter-opinion simply confirms my assertion that there is no science to this, because you won't identify what that science is; you have only a vague opinion that I am wrong.


Don't be insulted; it's just an argument on the internet. Besides, the correlation people like me are claiming is at most a statistical pattern with lots of room for variation. (You sound like a good programmer regardless. But what I find endearing in your comment is how you go and spoil your argument by hacking on an iPhone app in your spare time - and feel obliged by intellectual honesty to fess up to it!)

Of course, you might also mean that the debate is insulting because it's pointless. I can understand why it would seem that way. What does it matter whether someone programs outside of their job or not? It doesn't, intrinsically.

Here's why I think the subject is interesting anyway: there's no cheap way to tell who the really good programmers are. This is not just a hard problem, it's a hard problem that seems like it ought to be an easy one. Why can't you just devise a test? People have tried to figure this out every which way, spending many millions, maybe billions in the process. And no one has a good answer. That's genuinely surprising.

Given that situation, any criterion one can use as a (cheap) proxy for "good programmer" is of interest, even if it's imperfect.

What makes the "hacks in spare time" correlation with "good programmer" interesting is not that it captures all of the good programmers. There are certainly good ones who don't do that; i.e. the test does produce false negatives. What makes it valuable anyway is that it excludes the overwhelming majority of bad programmers, the ones for whom it's a job that they aren't very good at and aren't motivated to get good at, who form 90% if not 99% of the professional population. In short, the test is interesting because it excludes many more false positives than it admits false negatives.
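
To sanity-check that base-rate logic with made-up numbers (all three probabilities below are assumptions for illustration, not survey data):

    # Bayes check of the "hacks in spare time" proxy. Suppose 10% of
    # professionals are good (per the 90% figure above), half of the
    # good ones hack in their spare time, and 5% of the rest do.
    # All figures are invented for illustration.
    p_good = 0.10
    p_hacks_given_good = 0.50  # proxy misses half the good ones
    p_hacks_given_bad = 0.05   # but very few bad ones hack for fun

    p_hacks = (p_good * p_hacks_given_good
               + (1 - p_good) * p_hacks_given_bad)
    p_good_given_hacks = p_good * p_hacks_given_good / p_hacks

    print(f"P(good | hacks) = {p_good_given_hacks:.2f}")  # ~0.53

Despite missing half the good programmers, the filter lifts the share of good ones in the pool from 10% to roughly 53%, about a 5x enrichment, which is the sense in which excluding false positives matters more than admitting false negatives.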

A corollary is that all the people who protest "I don't code in my spare time, yet I am a good programmer" aren't really adding much data to the discussion. The ones we need to hear about are the bad programmers who do. :)


A small sample size would just imply greater deviation. There's no reason to assume the programmers outside your sample would be 10x better than those in your sample.
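
Quick illustration (the normal distribution here is just an assumed stand-in for whatever "programmer output" really looks like):

    # Smaller samples mean noisier estimates of the mean, not
    # systematically inflated ones. The distribution is an assumption.
    import numpy as np

    rng = np.random.default_rng(1)
    population = rng.normal(loc=100, scale=15, size=1_000_000)

    for n in (10, 100, 10_000):
        means = [rng.choice(population, size=n).mean()
                 for _ in range(1_000)]
        print(f"n={n:>6}: mean of means = {np.mean(means):6.1f}, "
              f"spread (std) = {np.std(means):5.2f}")
    # The spread shrinks roughly as 1/sqrt(n); the small-sample
    # estimate is noisier, not 10x too low.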

Well, A) how do you plan on quantitatively measuring that so that all candidates are considered equally, and B) what does that have to do with writing software?

This form of argument always seemed disingenuous to me. After spending some time with a programmer, especially in a work environment, it's easy to get an overview of how strong their skills are. "Average" would mean either the average of your employees or the average of similarly-positioned employees in general, neither of which is hard to have a handle on after being in the industry for a bit of time.

Just because you can't with current technology write a program to give an output from 0 to 100 grading a metric doesn't mean that the metric is impossible or even difficult for a human to evaluate. Leave your metric at "a heuristic evaluation of the employee's overall effectiveness" and the only way to "game" it will be to be a better employee.


Not to be an apologist, but you still have to account for programmer skill and number of programmers. I'd be interested in seeing the average and median for different programming languages, not just the top results.