
Sure, but so could the OS or app distributors. You need to establish a baseline of trust somewhere, and this will likely be on the official (or your cherry picked official) images, and you build from there.



I think you'd need device-level keys. You couldn't trust any particular image ... but you could perhaps know where it came from, which gives you a better substrate upon which to infer trust.

As I noted elsewhere, I don't doubt that many of us could, and I'd expect HN users to do better on average than developers overall, given the number of security conscious people here.

But that's not what I'm questioning; the question is whether homegrown images on average are going to do better. Look at the non-official images, and see how much nonsense is in there.

If you know you can do better, by all means do. For many of us that is the best option. And I absolutely wish there was more focus on more secure practices for the official images too. But I still think the official images are likely to be better than what most developers would cook up.

Doesn't mean it's good. Just better than the (terrifying) alternative.


A lot of the concern is that others may not trust the NCMEC, or that they don't trust that other images won't be added in whatever ends up on your device.

Apple wouldn't be able to tell if it's Winnie or a political organizer, because they'll only be provided with hashes of the original images. Any participating government would be able to include any image it wants without Apple's immediate knowledge.
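To make the concern concrete, here's a minimal sketch of hash-only matching. The blocklist entry and the use of SHA-256 are illustrative assumptions: the real system uses a perceptual hash that survives re-encoding, but the point is the same either way, the matcher only ever sees opaque hashes, so it cannot tell what the listed images depict.

```python
import hashlib

# Hypothetical blocklist: the matching party receives only opaque hashes,
# never the original images. This entry is the SHA-256 of b"abc", used
# purely for demonstration.
blocked_hashes = {
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
}

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; SHA-256 only matches exact bytes.
    return hashlib.sha256(image_bytes).hexdigest()

def matches_blocklist(image_bytes: bytes) -> bool:
    # Nothing here reveals *what* a listed hash is a hash of.
    return image_hash(image_bytes) in blocked_hashes
```

Whoever controls the hash list controls what gets flagged, and the matcher has no way to audit it.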

That lets you trust the image. Trusting software which is actually running somewhere seems harder.

I agree that building your own images is the future. We want to make this as easy as possible. I think it is essential for security.

Doesn't that presume that the base image upon which those in-house images are based is also trusted? Don't get me wrong, I'm not trying to be chicken little, but I don't think saying "we only publish our own images so we're ok" skips the authenticity problem.

Running your own registry is probably the case you had in mind.


The legal risks of an app like that would be huge. Identifying mushrooms from a picture can be really tricky, and I wouldn't advise it.

That is a really good point. I wonder if official images will be treated differently.

Yeah I have no doubt they would use the data gleaned from the images. All the data generated from processing the screenshots to be useful to the user would be just as useful to third parties. More useful than the screenshots themselves, in fact. I haven't heard anything about promises to keep the extracted information locally.

Imagine for example that you’re a high-ranking politician. You have staff and PR departments and the whole entourage. You take photos regularly and you want those photos to be public to keep building your brand. It’s just a fact that the public won’t fully trust you if there’s a smear campaign, but you have the originals of everything and that’s very powerful if the public can validate them as original.

I’m sure we already have the technology needed to build the tools do that validation, I just don’t know what the winning choices are yet.
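The simplest building block for that kind of validation is already commodity: publish a digest of the original at capture time, and anyone can later check that a released file is bit-identical to it. This is a minimal sketch, not a complete scheme; a real tool would also need signatures and trusted timestamps so the digest itself can be trusted.

```python
import hashlib

def publish_digest(original: bytes) -> str:
    """At capture time, publish the SHA-256 digest of the original photo."""
    return hashlib.sha256(original).hexdigest()

def validates(candidate: bytes, published_digest: str) -> bool:
    """Anyone can check a released file against the published digest.
    Any alteration, even one bit, produces a different digest."""
    return hashlib.sha256(candidate).hexdigest() == published_digest
```

Note this only proves a file matches what was published; proving the published original was itself genuine is the harder problem the surrounding comments are about.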


If the photo looks "fake", sure. I think a decent photo would work great - maybe one of the founders or from users?

Any nontrivial image relies on a large number of other images as a base, and just because you built it yourself doesn't mean that you didn't just download and install software with known security vulnerabilities.

Yes. Same as any organisation who is sent a copy of the screenshots by the user.

Trusted hardware running remotely-attested trusted software can capture imagery that can be highly assured to be real.

Trusted timestamping, when correctly implemented, completely prevents the generation of imagery after-the-fact. Any false imagery would need to be pre-prepared prior to an event in time, or generated in real time. And a trusted device would need to be exploited to "record" it.
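The timestamping idea can be sketched as follows. This is a toy model under stated assumptions: a real timestamping authority (e.g. per RFC 3161) would sign with an asymmetric key, whereas the HMAC and the `TSA_KEY` here are stand-ins to keep the sketch self-contained.

```python
import hashlib
import hmac
import time

# Stand-in for the timestamping authority's key; a real TSA uses an
# asymmetric signature so verifiers never hold the signing key.
TSA_KEY = b"hypothetical-authority-secret"

def timestamp(image_bytes: bytes) -> dict:
    """Authority attests: 'this hash existed no later than time t'."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    t = int(time.time())
    msg = f"{digest}:{t}".encode()
    return {
        "digest": digest,
        "time": t,
        "sig": hmac.new(TSA_KEY, msg, hashlib.sha256).hexdigest(),
    }

def verify(image_bytes: bytes, token: dict) -> bool:
    """Check the token binds *these* bytes to the attested time."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    msg = f"{digest}:{token['time']}".encode()
    expected = hmac.new(TSA_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token["sig"], expected) and digest == token["digest"]
```

Anything generated after the fact can't produce a token with an earlier attested time, which is exactly why fakes would need to be pre-prepared or generated live on a compromised trusted device.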

PKI allows individuals/organizations/governments to cryptographically associate their reputation with specific pieces of content.

There will always be the possibility of exploitation of trusted hardware. Apple's eternal fight against the jailbreak scene is evidence of this. However, the combination of these techniques would go a long way towards making the production and distribution of fake imagery difficult.

But, as already stated, the actual problem is societal. People don't really care that much, and enjoy living in their echo chambers.


Yes, but that won’t work since innocent looking photos won’t match the visual derivative.

That's what the company claims, and maybe it's right, but how do we know? Do we have access to the source code? Will it change in future? You can be absolutely sure that if the company thought they could make more money by storing the photographs or somehow associating them with real identities then they would.

If a primary goal of a consumer of the images is security, how can we trust the images not to have backdoors or virusesesses [extra s added for comedy]?

And you are supposed to trust images not to have vulnerabilities?
