
I don't think it should be surprising that few users actually look at the code or are willing to dig into a foreign code base. It's still true that open source makes it possible, which is a huge step up from any other model. There's a lot of backlash right now due to some very high-profile bugs that have been around for a very long time. But would those bugs have been found if the programs hadn't been open? Also, look at what happened after Heartbleed: another group of people decided to dig into the OpenSSL code base and try to clean it up, finding and fixing lots of other issues, without any authority from the original authors. That's the benefit of openness, in my opinion.



So if it weren't open source, how would you go about discovering Heartbleed? It was discovered independently by multiple people because it was open source, so contrary to your naive assumption, people are looking at these things.

Here is the story of how it was discovered and patched: http://en.wikipedia.org/wiki/Heartbleed#Discovery

Are you really arguing that discovering and fixing Heartbleed would be simpler and faster in a closed source form?


While you can argue that a given piece of open source software can be more insecure than a proprietary alternative, auditing software requires access to the source code; that is non-negotiable. And with open source, everybody can audit, with no restrictions. Yes, OpenSSL is a piece of shit, but how do you think Heartbleed was discovered, by two independent parties no less?

Then there's another effect that I like: after the initial patch was released, the story went public, we got notified immediately, and then we could discuss what caused it and look at the actual commits and who made them. Such a catastrophe can sink a company, which is why you never see post mortems like that for proprietary stuff. And yes, even I as a developer cannot audit software for security, but the point is that I could hire somebody else to do it for me, like the Finnish company that discovered Heartbleed.

So yeah, there is no concrete proof that proprietary stuff is less or more secure than open source, but the point is that we'll never know, because nobody can know how secure something is without looking at the source code.


You've obfuscated a good point. Open source is a necessary but not sufficient condition for openness. Actual human beings must also understand the source well enough to audit and modify it. If Heartbleed had occurred in closed source software, it might still be an active problem; the vendor would be reluctant even to admit the flaw, because it makes them look bad. And the whole world realized that one guy was maintaining OpenSSL, and he was on the edge of poverty. It was a wake-up call. Thousands of devs looked at the code and understood it well enough to patch and fork.

It's also true that, because of historical accidents, we have several more examples of https://xkcd.com/2347/. However, that's not an argument against open source. It's an argument that we all should take ownership of what we ship, all the way down, without exception. An open CPU definition is a necessary, but not sufficient, requirement for this level of ownership.


Unless you have personally read, fully understood the security implications of, and compiled from source every application currently running on a Linux machine, including the kernel, then you really have little guarantee of safety in open source, and probably not even then.

Most Linux users are not experts in all of the languages their applications might be written in, and so are neither capable nor likely willing to pore over millions of lines of polyglot code just to verify that the latest version of Iceweasel doesn't have an NSA backdoor (or what have you). Therefore, the blind trust most Linux users place in whoever wrote their distro and their applications is no different from the blind trust Windows and OS X users place in the companies that make their operating systems and in any binaries they download from third parties.

Granted, in the open source case, anyone can look at the code, and the axiom that "with enough eyes, all bugs are shallow" does sometimes work. But it doesn't necessarily follow that no one at Microsoft or Apple is paying as much attention to their code as the masses are to open source code. It's just an assumption on the part of the open source community, that open source necessarily leads to greater code coverage because the code is available to everyone, versus a subset of employees at a company, and that it's harder to hide things in plain sight when the source code is available. This assumption of course turns out sometimes to be spectacularly untrue, as it was in the case of Heartbleed.


But if the code is open source, we _can_ verify its security?

Heartbleed and Shellshock, the two most significant vulnerabilities found in heavily used open source software, were found by vulnerability testing and not code inspection. So while being open source is a nice-to-have attribute for a piece of software, that's as far as it goes. Painting open source as being a magical wand that wishes away all our security troubles is completely out of order.

Edit: I'll go further. It's become dismayingly apparent that very little systematic code review of open source software in order to secure it is actually taking place. It now seems quite possible that the most thorough investigations of software vulnerability, via code analysis or any other techniques, are carried out by those wishing to exploit them. They are well funded and highly motivated. Looked at in that light, the balance may well tip towards open source actually increasing the likelihood of software vulnerabilities being exploited maliciously.

The open source community has a long way to go if it's going to clearly demonstrate that its model is advantageous, and complacent pronouncements of its assumed superiority like this aren't going to achieve that.


Yeah, free software isn't a development model and it doesn't mean it's made by amateurs. As another prominent free project with a much better track record for security and stability, consider OpenSSH, the crown jewel of the OpenBSD crowd, a free software distribution itself popularly renowned for its security. Theo's criticism here of OpenSSL carries a lot of weight.

I understand that people love open source, but how is that relevant here? OpenSSL, for example, is open source, yet that didn't prevent Heartbleed and other exploits from happening.

I also think that open source is better than closed source. Nothing to argue about.

What I was wondering when I read the same sentence you quoted: how many really serious security bugs like Heartbleed, CVE-2008-0166, or the xz drama are happening without people finding out about them and publishing their findings?


Open source projects that have a high usage and high visibility have their flaws quickly fixed _because_ the code is open and for everyone to see. It's an iterative process that grows with the popularity of the project. Popular open source projects have been open for years and/or have lots of people that participate, review, report bugs, and fix them.

A closed source project can have thousands of security bugs no one will ever know about. Since it's closed, it's hidden, so no problem... until it's leaked.


I think issues like the heartbleed bug demonstrate that open source is not necessarily a credible defense against either incompetence or malice.

Open source only makes bugs shallow if lots of people are using the software and reviewing the code. OpenSSL is a great example of code that lots of people ended up using but not reviewing that had/has many issues.

I'm not saying anything negative about open source... I'm just saying that it's more complex than just having the code available.


Open source still has its security benefits. When an exploit is publicized, the "fix" for it can be highly scrutinized, and people can comment meaningfully on whether it actually fixes it, and can contribute their own. I remember looking at diffs and the accompanying analysis when heartbleed was out, and it was informative and reassuring.

When an exploit in closed source software is publicized, we have to take their word for it that it gets fixed.


Too simplistic of an answer, though it could be part of it.

I think we wrap ourselves in a bit of false security when we say something is open source and think that automatically makes it more secure. We assume someone has looked at the source. But has anyone really? And those with the most incentive to look into these things might not be inclined to share the vulnerabilities back to the community for safety's sake, given the princely sums being offered by companies like Zerodium.


I might be completely wrong, but I don't see much difference between open source and closed source in this case. If I were biased I might say that it could have taken substantially longer to discover such a bug in closed source software, but that seems about as sensible as saying that open source has fewer security bugs because more people look at the same code.

In the end software is written by people, and people make mistakes. I'm pretty sure there is a lot of software, be it open or closed source, that has had absolutely terrible security bugs. Judging open source as a whole by looking at a single project sounds a little bit like overgeneralization. That said, they are obviously in need of help; I hear a lot of complaints from people about the OpenSSL code.


People tend to cite heartbleed as an example of how "open source" has no intrinsic security value. While I agree that something being "open source" does not automatically make it secure, I think that it does allow us to gauge the security of the project.

In the case of heartbleed, anyone who has ever spent more than 30 seconds looking at OpenSSL can tell you that it is not secure software and never will be. We can't identify and remedy every possible vulnerability, but we do have enough information to know that events like heartbleed will happen again and again.


Arguably vulnerabilities would be more likely to be found and disclosed if the code was open source.

What consensus? All I'm seeing are assumptions that being able to read code easily means it is more secure. Someone can write an open-source project that looks secure but can be misused to do bad things without anyone catching it in the act.

Look at Heartbleed and how lack of funding led to a horrible bug being missed. There have been other open-source projects hit by similar issues, just as there are closed-source projects being hit with issues of their own.

The nature of the code license does not, and I stress this strongly, lead to anything being more secure than other solutions.

A properly funded and talented team of developers working on an open source project is just as secure as a properly funded and talented team of developers working on a closed source project.


You appear to be unaware of the large industry reverse-engineering software of all sorts. You could compare comparable projects and see whether source availability correlates with fewer vulnerabilities, lower severity, etc.

Similarly, the security community has discussed the possibility of intentional vulnerabilities in open source software for decades. Sure, someone would probably notice if you submitted secret-nsa-exploit.patch, but it's unclear that anyone would notice if you submitted, say, a Heartbleed-style bug, let alone something like the NSA's Dual EC (Dual_EC_DRBG) backdoor.

To be clear, I've been working with open-source software since the mid-90s. I think the model has a lot to offer but it's not magic. Lazy fanboy activism doesn't do anything but lower your credibility and help the companies which are arguing that open-source isn't safe to use (or isn't safe to use without paying them to manage it).


There are plenty of companies that have released their source code but don't support it the way a typical community-driven open source project is supported.

This is especially true for certain privacy- and security-focused applications. For example, Signal releases its code, has quite a lot of users, and doesn't report an unmanageable overhead from having released its source code.

It's not just a matter of trusting their intentions, it's a matter of knowing that their code matches their intentions. I trust OpenSSL (mostly, these days) and I always trusted the intentions of the developers, but if their code was not open it would not be half as secure today.

