Can't decide whether to love or hate this guy. For his young age, he seems to have an impressive skill set, but you can totally tell the douche growing inside
No, a douche would have sold this exploit to an underground group. You just drew attention by show-hacking a big player, preventing other people from finding the exploit while it went unwatched - that's honorable in my opinion.
You found a problem, tried to warn them. They didn't listen, so you showed it to them, without harming anything (aside from a few egos). That's doing the right thing.
You did it under your own full name. For me that's enough to consider your intentions to be good, even though it might've been smarter to try a bit harder to bring attention to the issue in other ways first.
He must not be your average 18-year-old. This guy had read/write access to GitHub and potentially many more sites; imagine what damage he could have caused. He may not have done it perfectly, but he Did the Right Thing(tm).
A lot of context is lost in text/email/IM. Also remember that English is not his native language, and he's likely from a very different cultural background.
Give the guy a break.
On balance I think he's just a well-meaning kid who's achieved notoriety the wrong way, but his deepest intentions are good.
Rather like Robert Tappan Morris, come to think of it!
He made an attempt to patch upstream, was shot down, and then proved his point without hurting anyone. Maybe not the best way to get the issue out, but we are talking about it and it wasn't especially harmful. Mission accomplished?
I don't know how else he could have done this. He was being ignored!
He didn't try to "patch" anything -- there was no code attached to the issue he filed -- and he wasn't ignored.
What _really_ happened is that he was told, "no, we think that this is the application developer's responsibility." He became frustrated that the response wasn't what he had anticipated, so instead of acting like a mature software developer he started acting like a petulant child.
The point is, this Github exploit could still exist even if some protections were set by the framework. Developers should take their app's security into their own hands (and I'm sure Github does) by employing a solution similar to attr_accessible.
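To make the mechanics concrete, here's a minimal sketch in plain Ruby (no Rails; `User`, `update_attributes`, and the `ALLOWED` whitelist are illustrative names, with the whitelist playing the role that attr_accessible plays in a real app):

```ruby
class User
  attr_accessor :name, :admin

  # Naive mass assignment: every key in params becomes an attribute write.
  def update_attributes(params)
    params.each { |k, v| send("#{k}=", v) if respond_to?("#{k}=") }
  end

  # Whitelist-based variant, in the spirit of attr_accessible.
  ALLOWED = [:name].freeze

  def safe_update(params)
    params.each { |k, v| send("#{k}=", v) if ALLOWED.include?(k.to_sym) }
  end
end

user = User.new
user.admin = false

user.update_attributes(name: "egor", admin: true)
puts user.admin   # => true: the forged "admin" key got through

user.admin = false
user.safe_update(name: "egor", admin: true)
puts user.admin   # => false: only whitelisted attributes are assigned
```

The attacker only needs to add an extra form field (`user[admin]=true`) to the POST body; nothing in the naive version distinguishes it from a legitimate field.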
> He made an attempt to patch upstream, was shot down, and then proved his point without hurting anyone.
Assuming no economic impact to github you mean. If paying users leave because they feel github is no longer safe for private repos because of this, that is harm.
I imagine a bank would be far more grouchy if someone exploited a vulnerability and deposited one cent into someone's account "just to show there was a vulnerability" then publicized it, without talking to them about it first. <sarcasm>No harm done right?</sarcasm>
Now, I am not saying that it is bad that this github vulnerability was found and fixed. I am very glad! But I think it could have been far more responsibly done.
He tried to warn rails about it, and they closed the issue. Then he reopened it using the bug to show that it was an actual vulnerability, and they closed it again. https://github.com/rails/rails/issues/5228
Agreed. I read this thread, he nicely tries to draw attention to the issue several times, and gets the usual "big corporation pushback": Go away, if it was so important, we'd already know.
Then he proves his point, without hurting anyone. In my book, that deserves an A.
Obviously this situation is a bit more complicated, as a ticket was opened up, and a lot of community discussion occurred. In general, emails to the security list are taken extremely seriously.
People are reading this as a vulnerability in rails when it is actually a vulnerability in GH's code. Your comment unfortunately is going to add to it :(.
What should he have reported to rails security team? Shouldn't he have been contacting GH security team instead?
If an app makes it possible to do SQL injections, whose fault is it?
What Rails has done is pick a particular default (whose correctness can be debated) and document how it can be exploited and how to safeguard against it.
You didn't really answer my question. Rails has all the helpers in place to sanitize input against SQL injection. Why do they apply safe defaults there but not in this case? Both amount to making unwanted DB modifications.
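For comparison, here's the injection side of that question stripped down to plain Ruby strings (no real database; `unsafe_query` is a made-up helper, not a Rails API):

```ruby
# String-built SQL lets user input rewrite the query itself.
def unsafe_query(name)
  "SELECT * FROM users WHERE name = '#{name}'"
end

injected = "x' OR '1'='1"
puts unsafe_query(injected)
# => SELECT * FROM users WHERE name = 'x' OR '1'='1'
# The input became part of the query's structure, not just a value.
```

Rails helpers like `where("name = ?", name)` avoid this by keeping the value out of the query text, which is exactly the kind of safe default the parent comment is asking about for mass assignment.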
When I read "this vulnerability affects tons of Rails apps," I read that as a security vulnerability in Rails. I'm not a user of the framework, but I've heard "convention over configuration" often enough to think that if this was brought up in the Rails issues tracker, it should be prevented by convention.
The problem with this is that the getting started guide[0] uses this 'mass assignment' method in one of the examples (under section 6.8) without any mention of a caveat. The scaffolding does likewise.
You'd be forgiven for thinking there was no vulnerability, given the lack of warning over that sort of code, and the fact that Rails does a lot of 'magic' behind the scenes (especially since you're using their own helper classes to handle form input and such like).
The Rails Guides may discuss it, but that does not excuse the fact that for years the Rails core team knowingly shipped insecure-by-default code (and generators that created it).
No, this isn't a "Rails vulnerability" in the traditional sense, but the level of immaturity and groupthink in the response to these issues being reported is staggering and somewhat shameful.
Where would one find that link? The link to "Bugs/Patches" on rubyonrails.org links directly to the GitHub issue tracker. A search on Google for "link:rubyonrails.org/security" finds only a single reference, and it is from someone evaluating how other vendors handle security issues. (My guess: it used to be linked, but then Rails moved to GitHub, and some intermediate landing page was lost in the shuffle.)
OK, here's a specific mistake made by a rails committer in handling this issue. drogus closed [this issue](https://github.com/rails/rails/issues/5239) without bothering to tell GitHub about it. This meant that fewer people saw it, and fewer people had a chance to tell GitHub about it. (One even thought of telling them about it, but unfortunately suggested that someone else do it instead of actually doing it.)
I think that perhaps the Rails team should have someone reviewing whether issues were properly handled.
I would expect the guy who found an issue with GitHub to report it to them. Yes, the rails people could have, should have.. But they explicitly asked "him" to report and there is no word on whether he did it or not.
You're stating the obvious. Egor Homakov should have done a lot of things differently. But there is little that can be done about the behavior of bad actors in the rails community. With people on the team, it's different. Practices can be audited, mistakes can be pointed out, and the fine people in the Rails team can respond to criticism and improve their performance.
(1) it has seemingly embarrassed some rails committers into taking this seriously, whereas they dismissed the issue before;
(2) I bet there were at least 20 devs who saw this on HN, said fuck my life, and hopped on their vpn to check if their site is vulnerable.
No-drama disclosures didn't accomplish either of these things. Hopefully github won't take it too personally, and Egor was actually (as he seemed!) careful not to break anything.
(And I am not being a hater here, I am also using Rails for some hobby projects & had to double check all my code. I somehow didn't even think about foreign keys in mass assignment... sigh.)
This reminds me of how PHP used to turn HTTP request variables directly into global programming variables by default. Now it only happens when you enable the register_globals option. I don't think I've ever met anyone who didn't consider it a huge security issue.
This rails behavior is actually even more powerful than the old PHP one for hackers because with this you get directly into the model and then the DB when everything is still left as generated, not just the temporary variables. It's actually pretty surprising how much resistance there is to fixing the issue.
It could be that the proposed whitelisting isn't the only solution. It does require annoying configuration. With PHP, nowadays, most people just access a particular array when they want their request variables. Similarly, maybe Rails could have a request model object and a DB model object with simple methods for copying state between the two. Maybe combine it with some sort of validation logic that produces user-friendly error messages. I guess it is still more work than default overwriting of the DB with request variables, though.
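One possible shape for that "request model vs DB model" idea, sketched in plain Ruby (all names here, `RequestParams` and `copy_to`, are hypothetical, not a real Rails API):

```ruby
class Record
  attr_accessor :name, :admin
end

class RequestParams
  def initialize(params)
    @params = params
  end

  # State only crosses the boundary via an explicit, named copy step;
  # anything not listed in `fields` never touches the record.
  def copy_to(record, *fields)
    fields.each { |f| record.send("#{f}=", @params[f]) if @params.key?(f) }
    record
  end
end

req = RequestParams.new(name: "egor", admin: true)
rec = req.copy_to(Record.new, :name)  # :admin was never named, so it's ignored
```

The configuration burden is about the same as a whitelist, but the listing happens at the call site, where the developer can see which request this is and which fields it should legitimately touch.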
I was also thinking of PHP's register_globals. I was tempted to make a snide remark, so I'll make it now. The difference here is that the PHP group realized register_globals was a bad idea, deprecated it in 5.3, and removed it in 5.4. Furthermore, the default has been OFF since 4.2.0. The resistance to fixing the Rails problem just makes me even less likely to give Rails a shot; it should be really bad PR when you ignore security issues.
Inconsistent error handling, for example. Why do some functions fail silently, some functions return false, some functions produce warnings, some functions throw exceptions, and some functions tell you to call another function to retrieve the error code?
Ruby and Python are much more consistent in that regard.
If a field doesn't have any validation rules set, it will be thrown out when you save the model. This way you won't mass-assign to a column that was never meant to be mutable. It's a little more work to get up and running, but I think it's a good tradeoff.
(You do have the option of turning this off, but you'd have to do it intentionally).
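A toy version of that behavior might look like this (assumed and simplified; `Model`, `validates`, and `save` here are not any particular framework's real API):

```ruby
class Model
  # Class-level registry of fields that have a validation rule.
  def self.validates(field)
    (@validated ||= []) << field
  end

  def self.validated
    @validated || []
  end

  def initialize(attrs)
    @attrs = attrs
  end

  attr_reader :saved

  # Only attributes with a validation rule survive the save;
  # everything else is silently dropped.
  def save
    @saved = @attrs.select { |k, _| self.class.validated.include?(k) }
  end
end

class Post < Model
  validates :title   # :author_id has no rule, so it can never be mass-assigned
end

post = Post.new(title: "hello", author_id: 42)
post.save            # saved attributes contain :title only
```

The nice property is that the whitelist falls out of work you were doing anyway (declaring validations), rather than being a separate list to forget.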
If he'd known that there was something the GH team missed, he should have just brought the issue to them directly. If I realize my neighbor's house is in danger of collapsing because the contractor used the wrong type of wood, I don't bring the issue up with a lumber yard and then knock over my neighbor's house to prove a point when they ignore me.
I agree that it's a problem that many developers aren't aware that they need to protect against mass assignment, but it seems like this dude is totally misunderstanding the entire ecosystem here, and now people are calling him a "hero" because he took advantage of something that everyone already knows.
> If I realize my neighbor's house is in danger of collapsing because the contractor used the wrong type of wood, I don't bring the issue up with a lumber yard
Yes, you do. Because the lumber yard sold your neighbor the wrong wood.
If you're going to make a framework for a language, do so in a manner that discourages stupidity.
PHP learned this the hard way quite a few years back. You never make things easier for the end-user at the expense of security. It's time for Rails to learn the same lesson.
Given the recent story on HN about how the former YouSendIt founder took their servers down to prove their vulnerability [1] [2], I'm surprised how casually these "lol-hackers" (that's going to be my term for them) go about showcasing these vulnerabilities by exploiting them in the real world and messing with people's real things.
I know as hackers, we feel a duty to show people how serious these things are and that we get impatient and annoyed when ignored. And I also know that it's hard for us to reconcile the idea that when we show the owners rather than tell, it's suddenly considered a crime. But it is.
Let's try an analogy. Door locks on houses are ineffective. Think about it. Your house is covered with windows, which are made of glass. Glass is really easy to break. I mean really easy. If you found out your neighbor didn't have a house alarm, you might talk to them and tell them they should get one. If they didn't get one, would you then break into their house one night and walk into their bedroom to show them how dangerous it is?
OK, who knows, maybe you have a weird relationship with your neighbor. Furthermore, this is an imperfect analogy, because here, Rails and Github are both responsible for other people's property.
But now imagine it's a business across town and that you don't actually know the business owner. If you broke into their business to show them their building's security vulnerabilities, you bet your ass they would press charges and I don't think anyone would blame them. Even if you're doing it with the best intentions, it's still vandalism at best.
All of that being said, this is a very effective way of making your point and getting people to fix the problem. That doesn't make it right. But if you're willing to put yourself in harm's way and essentially become a martyr to get these security vulnerabilities fixed, more power to you.
Door locks on physical houses are just a small part of a much greater security picture involving your community, observant neighbors, other monitoring systems, the police, even dogs. And yes, houses do get broken into causing a massive amount of aggregate economic loss every year.
But on the internet everyone's front door is accessible from even the most remote and hostile places in the world, and you're pretty much on your own for securing it. Internet-facing systems also tend to hold far more valuable things than your typical home.
For these reasons, I don't think the household lock analogy works very well.
EDIT 2: Hmm, maybe I shouldn't have removed my original response, I just didn't want to derail the conversation upon second thought. For posterity's sake, my original response was something along the lines of this:
---
I think you may be overestimating the effects of "the greater security picture" involving community, observant neighbors, other monitoring systems for the average community. For example, my first startup was RateMyStudentRental.com, and as I recall adding automated motion-activated sensors and lighting actually had a negative effect on break-in protection. As in, adding such a system actually increased your chances of having a break-in. According to the study, neighbors don't actually pay much attention to someone standing at a well-lit door (mileage varying based on the specific community obviously).
---
ORIGINAL EDIT: I won't comment on the technicality of the differences between the analogous situation and the actual situation at hand; the differences have nothing to do with the point being made. Don't get caught up in the analogy, because it's just an analogy.
In case it wasn't clear the first time around, the entire point of the analogy was that, breaking into and vandalizing someone's business just to show them that it can be done is generally a bad idea. Though I don't think anyone would argue it isn't effective.
> [...] automated lighting systems actually have a negative effect on break-in protection [...]
Do you have a source for this? I've always been under the commonly held impression that this has a preventative effect, and it would be very interesting if the opposite really is true.
Good question. It was a long time ago, and I'm having trouble googling for it, because all I get in search results are articles and how-tos by motion-sensor companies selling their products. I'll search though and update if I find it.
> Much so-called security lighting is designed with little thought for how eyes -- or criminals -- operate. Marcus Felson, a professor at the School of Criminal Justice at Rutgers University, has concluded that lighting is effective in preventing crime mainly if it enables people to notice criminal activity as it's taking place, and if it doesn't help criminals to see what they're doing. Bright, unshielded floodlights -- one of the most common types of outdoor security lighting in the country -- often fail on both counts, as do all-night lights installed on isolated structures or on parts of buildings that can't be observed by passersby (such as back doors). A burglar who is forced to use a flashlight, or whose movement triggers a security light controlled by an infrared motion sensor, is much more likely to be spotted than one whose presence is masked by the blinding glare of a poorly placed metal halide "wall pack." In the early seventies, the public-school system in San Antonio, Texas, began leaving many of its school buildings, parking lots, and other property dark at night and found that the no-lights policy not only reduced energy costs but also dramatically cut vandalism.
I remembered it incorrectly, as apparently motion-activated lighting helps, but on-all-night floodlighting makes the situation worse.
Peter Gutmann reported some interesting studies in his keynote talk at Shmoocon this year. It's probably available on Ustream or the Shmoocon site.
In short, sometimes darkness helps. Consider a tall glass office building. If all the lights are out, nothing stands out to a security guard more than a flashlight waving around in the darkness.
Do you have any idea how much it costs to clean up after an "intrusion" or "data breach"?
Of course, it's unfair to blame all those costs on the guy who had to go as far as actually escalating his privileges in order to unmask the Rails developers for being such knuckleheads.
As I understand it, for all you know there were other intrusions anyway, and you'd have to go looking for them and clean up after them as soon as you learned there was an exploit. Whether one dude actually posted something under someone else's name doesn't change the fact that there has been a huge hole in github for (apparently?) years.
Cleaning up after the breach consists of two parts:
1. Changing passwords, removing unauthorized changes/fraudulent transactions, etc.: dealing with evil people accessing info that shouldn't be accessed.
2. Notifying clients of vulnerability existence, securing the system properly and changing procedures so that such kind of breach would have less chance of happening again.
In the white hat scenario, the first part does not exist and the white hat does not actually do anything harmful. The second part still exists and can cost tons of money, but this part is not the fault of the person who found the problem.
So while the costs _after_ a benign "intrusion" like the one done to Github can be substantial, both in money and reputation, the costs _because_ of it are much less, since most of the costs weren't caused by it; it only exposed the pre-existing need to bear those costs. Like a doctor diagnosing somebody with a serious illness: he's not at fault that the person now has to spend tons of money on drugs and medical procedures.
This analogy is just wrong; on the web there is near-perfect anonymity. Very often you can also hide your traces, and nobody will ever find out what you did.
You have to assume every exploit will be abused.
The most stupid thing you can do is attack the white hats, because the next hacker will shove your stupidity down your throat, whether in a lulz way or a black-hat way.
I would never let my data near a company which doesn't respect white hats, for security and moral reasons. I would remind you that we are on Hacker News, and that doesn't exclude the popular meaning of the word.
I agree security vulnerabilities should be addressed by the company as much as they can. But there's no such thing as a perfectly secure system, so there will always be some amount of vulnerability. My only point is that there are ways to communicate with companies and help them fix things without committing a crime and breaking them.
I wouldn't throw a brick at a store window to make sure they have an alarm installed. Likewise, I wouldn't screw with someone else's production system to make sure they have X or Y security vulnerabilities patched. If you're worried about your own data, then try the exploits on your own account, or repository as the case may be. If it's a concern, get a hold of the company and let them know.
You are at the mercy of the guy who found the vulnerability, whether you like it or not. This person can have any personality, and most of them don't care about your costs. They did your work and invested a lot of time. Most of them don't want money; they want appreciation for what they did. You can give it to them, or they take it, and they have the perfect tool for that.
Entirely true. And that person is at the mercy of their governing law and the company's ability to pursue and press charges should they decide to take matters into their own hands, regardless of intent (unless their governing law takes intent into account).
> If it's a concern, get a hold of the company and let them know.
Easier said than done. It's hard enough to find the right people to talk to when companies are so paranoid about connecting devs with customers. And if by chance you wind up talking to a PM first, the first thing he does is engage legal, which means that instead of fixing the problem and protecting their customers and data, you may now be threatened with lawsuits or have the incident reported to the FBI for investigation (on the assumption that you are in fact trying to extort them or something).
In this world, I'm not sure you can 'win' at this situation at present. Until there are protections for white hats that are broadly recognized, I'm going to side with people like Egor, who find a big damn problem and don't use it for ill.
EFF has a network of people they can put people in touch with, even if for a variety of reasons they can't offer legal help in a particular instance.
But that's not the point. While we can be grateful for the CCC and the EFF for existing and doing the community a great service, we can also believe that their extraordinary contributions should be unnecessary.
That's a very good point about there not necessarily being a right answer. Though of all the companies in the world, Github is one of the easiest to get in touch with their developers.
Maybe I missed something, but from looking through his posts, it seems that Egor felt the defaults in Rails were insecure (let's face it, they are), and then when he didn't get the immediate feedback from the rails team that he had hoped for, he decided to illustrate his point by hacking Github, which happens to be built on Rails. Nowhere did I read, on his own blog or within any of the rails issue tickets, that he actually tried reaching out to Github (i.e. contacting security@github.com) before exploiting the vulnerability.
Maybe he did try to contact Github first, but I didn't see anything mentioning that in any of the linked posts.
Maybe you do need to sometimes pull down someone else's pants to expose a problem. I think that's a moot point, because nobody should really be able to disagree with it on pure logic. The important question is: when someone feels they have no other choice and pulls down the person's pants, should this be considered an immoral or criminal act? The guy can always say, "Look, I never intended to do any harm, I just wanted to get people's attention to fix this before someone who was intending to do harm came along." The problem then is how to prove they really were so benign. It'd be easy for a lot of blackhats to hide behind that if they ever got caught, no? Even if there were complete immunity for people who immediately came out in the open and claimed responsibility for the sake of clearing the air, it'd still be easy for blackhats to hide their rationales in there while they did something else more malicious (and then if that malicious thing was ever found, the blackhat could easily say, "Hey, I told you guys to fix it!! Now look what happened!! Wasn't me though!").
It's more of a legal grey area I think. If I climb through an unlocked window of a company to enter a restricted area for which I don't have access, and then write (or piss) on the wall, I think it still counts as breaking and entering. Or trespassing and vandalism at least.
I don't think I'm going to take this analogy any further though; things get dangerous when you base an argument entirely on analogy.
It's obviously not "breaking". In fact, "breaking" is the difference between burglary and theft, and theft insurance is way more expensive than burglary insurance (which I learned today reading an insurance offer), because theft is nearly impossible to prove and usually due to the victim's negligence.
In the legal sense, you don't need to literally break something for it to be breaking in entering.
From the legal-dictionary [1]:
breaking and entering v., n. entering a residence or other enclosed property through the slightest amount of force (even pushing open a door), without authorization. If there is intent to commit a crime, this is burglary. If there is no such intent, the breaking and entering alone is probably at least illegal trespass, which is a misdemeanor crime.
Terrible analogy. More like someone's automatic door lock is opening by itself at random times of the day, and the owner doesn't believe it, so you leave a note on the inside. It's creepy, but not wrong.
But the owner doesn't know that. He has to assume you left a bomb under the passenger's seat, hid illicit substances in hard-to-reach spots, and put a potato in the exhaust.
So...? The owner has to assume that anyway, because they relied on faulty security. Assuming they even care; and honestly, why would you think they care now, when the evidence is that they haven't cared about this for at least months?
There's an old debate over what constitutes "gray hat" hacking, and whether it's moral.
I haven't looked at Egor's actions too closely, but he seems fairly close to a "white hat". He's been a bit immature, and so has Github, but there's been no real lasting damage.
Anyone who's written a web app knows that mistakes can happen, and they take time to fix. I'm sure Github will handle the PR crisis with an informative and unbiased mea culpa (even if they secretly think Egor was a bit brash). People will keep using them, because their interface rocks.
This will stay on the front page for about 24 hours, then there will be a few link bait articles that also cash in on it, but that's about it.
If this vulnerability is really due to attr_accessible, then that's got to sting for GitHub as this is a well-known "insecure by default" issue in Rails, and GitHub is probably the most (or one of the most) public Rails app out there.
Would be kind of scary if he had injected some nastiness into the rvm repo master branch, for instance, because I know some people do ride on the master version (`rvm get head`). Or into some gems that are built from Gemfiles pointing at the git repo on GitHub.
Luckily, git itself is quite resilient to attacks on repo integrity, so I don't think much long-term harm could have been done (no rewriting of repo history would go unnoticed, for example).
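A tiny sketch of why that holds (simplified; real git hashes full commit, tree, and blob objects, but the chaining principle is the same):

```ruby
require "digest"

# A commit id hashes the commit's content together with its parent's id,
# so tampering with any old commit changes every descendant id.
def commit_id(content, parent_id)
  Digest::SHA1.hexdigest("#{parent_id}\n#{content}")
end

c1 = commit_id("initial code", "")
c2 = commit_id("add feature", c1)

c1_evil = commit_id("initial code + backdoor", "")  # tampered first commit
c2_evil = commit_id("add feature", c1_evil)         # same second commit, rebuilt on it

puts c2 == c2_evil  # => false
```

Anyone who has recorded (or pulled) `c2` immediately sees the mismatch, which is why silently rewriting published history doesn't work.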
This is not a rails bug, but a github bug. The discoverer reported it in the wrong place, the rails people (correctly) told him to report it to github, he ignored them, and this hilarity ensued.
While this is a legit bug on github, and Egor deserves credit for finding it, he also deserves a scolding for the classic noob mistake of not reporting it to the right place.
That's a matter of opinion. He intentionally reported it against rails because he thinks it is rails' fault for making it so easy to have this security problem. Given that the #1 rails app out there written by professionals suffers from it, and reportedly others do too, he has a point. You can disagree if you want but it's definitely not a "classic noob mistake".