
This is an interesting and genuine case where I can see that it doesn't make sense.

In your case it should be possible to have a Windows "branch" with just critical vulnerabilities patched, distributed as binary diffs or something, rather than 680-odd MB patch sets.




Potentially. However, that also leaves a larger attack surface, in the sense that more patches = even more scrutiny. Conceivably such a scheme would require all parts to be included (I mean, why make it larger than necessary?), so it would be extremely fragile to any single maintainer seeing the potential for abuse and correcting it.

In your scenario, Microsoft is making a probabilistic statement ("a bet") that there will not be many future changes to the code.

That is, the binary patch path is more expensive than code patches, because it requires more specialized skills. At some point, with enough patches, it would have been more worthwhile to update the source and rebuild than to keep modifying the binary.


Could you make the argument (not here) that binary patches are not a reasonable way of distributing a patch for a program that is frequently updated?

It's not only binaries that are patched. Data files, which can be multiple GBs, are patched too. You can either delta-patch them, which requires significant CPU time, or replace them entirely. Most devs prefer the latter.
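To make the trade-off concrete, here is a deliberately naive block-level delta sketch (real tools like bsdiff or xdelta find much smaller deltas at much higher CPU cost; `make_delta` and `apply_delta` are hypothetical helpers, not any shipping patcher's API):

```python
def make_delta(old: bytes, new: bytes, block: int = 4096):
    """Naive delta: record only the blocks that changed.

    Returns a list of (offset, new_bytes) pairs. Real delta tools do far
    better than fixed blocks, which is where the CPU time goes."""
    delta = []
    for off in range(0, max(len(old), len(new)), block):
        o, n = old[off:off + block], new[off:off + block]
        if o != n:
            delta.append((off, n))
    return delta


def apply_delta(old: bytes, delta, new_len: int) -> bytes:
    """Rebuild the new file from the old file plus the recorded blocks."""
    buf = bytearray(old[:new_len])          # truncate if the file shrank
    buf.extend(b"\x00" * (new_len - len(buf)))  # pad if it grew
    for off, chunk in delta:
        buf[off:off + len(chunk)] = chunk
    return bytes(buf)
```

With a 10 MB file where only one block changed, the delta is a few KB instead of the whole file, but both sides pay CPU to compute and apply it; a full replacement is just a copy, which is why many devs take that route.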

Another reason: with small patches, especially those that actually binary-patch a file in place instead of replacing it, you must apply every single patch that affects that file, and you must apply them in the correct order.

Back in the day when binary patching was the norm, this was a common pain point. Good patches at least did a minimal checksum check to make sure they had the correct starting point. Bad ones made no backups and just went to town, and failures meant reinstalling and trying all over again.
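The "correct starting point" check described above can be sketched like this (a hypothetical patcher, assuming patches are lists of `(offset, bytes)` edits; the checksum-then-backup discipline is the point, not the API):

```python
import hashlib
import shutil


def apply_binary_patch(path, expected_sha256, edits):
    """Apply in-place byte edits only if the file matches the expected
    pre-patch checksum, and keep a backup so a failure doesn't mean a
    full reinstall."""
    with open(path, "rb") as f:
        data = f.read()

    # The "minimal checksum check": refuse to patch the wrong version,
    # or a file that has already been patched.
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("wrong starting point: refusing to patch")

    shutil.copy2(path, path + ".bak")  # backup for rollback

    buf = bytearray(data)
    for off, new_bytes in edits:
        buf[off:off + len(new_bytes)] = new_bytes

    with open(path, "wb") as f:
        f.write(bytes(buf))
```

A "bad" patcher is the same code minus the checksum and the backup, which is exactly why out-of-order application used to end in a reinstall.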


This is done to reduce the impact of vulnerabilities before a patch is out. Pretty common and sensible procedure.

Hard work and reverse engineering I would guess.

They also make it sound like they patch some non-Windows software. I'm curious what the quality of those patches is. It seems fairly easy (if expensive) to binary-patch most code most of the time, but I would expect a non-trivial rate of crashes or incomplete fixes with this approach. But maybe better than nothing?


There are only two reasons I can think of why they'd patch the binary directly: either they've lost the source code, or they no longer have an environment they can build it in.

Expecting every developer to be able to patch a binary is, to put it politely, unrealistic.

The world of software engineering is enormous, and knowing all parts of the software stack hasn't been remotely feasible for a very long time.

Also consider the fact that software engineering is a journey of learning, and every one of us is on a different part of that journey.


Binary patching hasn't been a thing for years, unfortunately.

Even bigger wreck on consoles these days, where an update can be 50+ gigabytes...


Usually, to patch a program, a binary patch is enough. Decompiling and recompiling is hard (even though that's what we do at rev.ng!) :)

Binary patches are very useful for security patches, like the mentioned Heartbleed problem.

Let's imagine just one example of patching a remote hole in a Windows server. First, you have to stage a duplicate of an old server with a new patch, which can take days. A production environment may need significant development effort just to integrate the patch, which takes days. Then run all tests and QC processes against it, which can take days. Then you can deploy it during a maintenance window. This is 1-2 business weeks.

Now multiply that times 1,000 different combinations of versions of Windows, applications, networks, platforms, and so on.

You're not just patching "servers", anyway. You're patching bare metal machines, hypervisors, AMIs, container images, software packages, plugins, network applications, security policies. Often vendor platforms don't even have a patch available so you have to implement a custom workaround, if one exists.

One could write an entire book about this subject. Please believe me, it's not simple.


Disaggregating security patches from feature sets would vastly increase the permutations they would have to test.

A lot of it is untested legal territory. A maintainer might choose to reject such patches out of caution rather than spend time and money on a lawyer trying to clear it up (with no guarantee the answer will be favorable).

This is bad and should be fixed, but there are fairly few circumstances where it actually creates a new vulnerability. The majority of uses of patch are applied to source code by someone who's going to end up running that code anyway, so applying patches you haven't read closely from sources you don't trust is already unsafe.

It would be interesting to see what percentage of meaningful security vulnerabilities couldn't be fixed via live patch. Of course, that does require investing the significant effort needed to get live patching to work, but it is possible.
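The core idea of live patching (kpatch/ksplice-style: redirect a vulnerable function to a fixed one in a running system, no restart) can be illustrated in-process. This is only a conceptual sketch with made-up names, not how kernel live patching is implemented:

```python
class Service:
    """Stand-in for a long-running component we cannot restart."""

    def check_token(self, token):
        # "Vulnerable" logic: accepts any token.
        return True


svc = Service()  # already running; restarting it would mean downtime


def fixed_check_token(self, token):
    # Fixed logic. (Hypothetical fix for a hypothetical bug.)
    return token == "secret"


# The "live patch": rebind the function on the running class so every
# existing instance picks up the fix on its next call.
Service.check_token = fixed_check_token

assert svc.check_token("anything") is False
assert svc.check_token("secret") is True
```

A real live patch has to worry about threads currently executing the old function and about changed data layouts, which is a big part of the "significant effort" mentioned above; vulnerabilities that require changing persistent data structures are the ones that tend to resist this approach.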

TL;DR: patching is hard, so use defense in depth to minimize/mitigate the impact of individual components becoming insecure from time to time.

No disrespect meant, but from a security perspective the idea of patching security-critical software with a patch from a stranger on the Internet is kind of crazy, isn't it?