
That's circular reasoning:

"This feature is dangerous because you cannot trust Apple to only use it for what they say they will use it, hence you cannot trust Apple since they doing something which can be abused"

Using your analogy that would be:

"A knife is dangerous because you cannot trust a criminal wielding a knife to not stab you, hence your friend by definition is a criminal since it wields a knife (she claims she's cutting bread but by definition she's no longer your friend because she broke your trust by wielding a knife which, slippery slope, could be used to stab you if she ever becomes a criminal, but we all know she will because ... the knife...)

All I'm saying is that this whole communication debacle has nothing to do with proving or disproving Apple's trustworthiness: they may be nefarious or not, independently of this feature. This feature doesn't make anything possible that wasn't possible before, and it breaks users' trust only insofar as users misunderstand what it is all about, and apparently that's what's happening.




There are two kinds of trust. I may trust Apple not to intentionally steal data. But I may also trust Signal to create a more inherently secure messenger, or trust Google to create a technically more secure browser.

What Apple and some users here are saying is that users don't have the intelligence to judge this, and so will have to trust only Apple.


You get a little bit of security by trusting Apple. But Apple does not give you the option to also trust someone else.

You trade your freedom for short-term convenience. In the long term it means that you have to accept all decisions Apple takes, even if you dislike them.


There is nothing which Apple Intelligence can do that a hypothetically evil Apple couldn't have done before, given sufficiently treacherous code in their operating systems. Thus if you use an Apple device, you're already trusting Apple to not betray you. These new features don't increase the number of entities one must place their trust in.

Whereas with apps like Gmail and WhatsApp on an iPhone, you must trust Google and Meta in addition to Apple, not in place of Apple. It doesn't distribute who you trust, it multiplies it.


These risks are valid for any device; they have nothing to do with Apple or the feature in question. The police can gain physical access to your devices with or without this feature being implemented.

I don't understand what you are arguing for. Physical access is not a software thing, and it is guarded by exactly the same mechanisms that guard access to the apple (the fruit) in your fridge.


You make some very good and interesting points, though I think it's wrong to just presume Apple is lax on security based on that one sentence.

Apple has done plenty to lose my trust, and very little to build it. But that's not really the subject at hand, though I do see where my word choice is misleading here.

You just brought up a better word: "liability". I'll go one step further: "attack surface".

When it comes to security in software, we don't need to work with many unknowns. The unknowns we do work with are the attack surface. By presenting a greater domain of unknown behavior, closed source software effectively presents me (the user) with a larger attack surface. Sure, I could trust that the extra attack surface is actually covered; but I can't know. With open source, I don't have to trust, because I can know instead.

If I am to choose between open and closed source software, then I am choosing between knowledge and trust. That is a completely different position than choosing between closed and closed: trust vs. trust. So long as any securely-designed open-source messaging app exists, iMessage is at a disadvantage in end-user security. Even if Apple can know for certain that iMessage's attack surface is not larger than an open-source alternative, we the users can't. Closed source software will always present a higher demand for trust.


While your general point is understandable, my position is not that I trust Apple to know what's best for me; if that's what you took away, then you've misunderstood my point. My point is that Apple has engineered their device in a way that guarantees the chain of trust for the hardware components. That is a position I understand and agree with, because there's no way to compromise it. I also understand that this makes things harder for third-party repair shops, but it doesn't invalidate their ability at all, so long as they're willing to jump through the hoops necessary to be in that chain of trust. I know what's best for me, and what's best for me is a phone whose components I can trust. It doesn't matter to me whether that's Apple or a third party. I'm not sure how that translates to "Apple knows what's best for me" rather than "Apple has created a device that, in principle, allows for fewer attack vectors". It wouldn't matter to me who did this. It only matters to me that there's no way to bypass it.

It would be the same as me hiring a security company to protect any other assets I have. I can either hire freelancers or I can hire a company that promises to run background checks on all its employees. That doesn't mean they know what's best for me but it does mean that I trust them to not hire someone who's been in jail for burglary.


Apple has other products which could incorporate this behavior (not trusting their customers), like iTunes content, iWork, or potentially jailbroken iPhones. I mean, there are various ways for Apple to gain more control over their users.

True, at some point you have to trust someone, whether it's your phone's manufacturer, your telco, or the developers of the apps you use. But when there's a flagrant disregard for users and for the potential impact a system like CSAM scanning could have on them, to me that crosses a line and means the company is no longer trustworthy:

> If a company actively screws its users in broad daylight, then what's going on behind closed doors?

At least previously Apple had the veneer of a privacy- and user-centric company. No more, if this goes through.


Social engineering will always be a concern. It was as much of a problem 40 years ago as it is now, and Apple putting dialogs in the way doesn't stop anything; it just changes the path attackers take to reach what is ultimately the same goal.

If you don't want to understand security, fine: just don't be surprised when your personal information ends up in the hands of the wrong people. Everyone has a duty to understand their own threat model, and any reasonably comprehensive model would include Apple as an ultimately untrusted party. It's a zero-sum game, so they may as well just make it easier for me to use their devices when I'm forced to.


I think the main distinction is that Apple claims to have a secure phone, but not an unhackable phone. A secure vault is hard to get into, but not impossible.

Should they have done something about this? I believe so, but they are not marketing themselves as secure against state actors. They have released Lockdown Mode, which may or may not have prevented this particular exploit.

It's important to keep the demographic of iPhone users in mind. The average user does not want to be inconvenienced by security measures irrelevant to them. And if a competitor (Android) is providing a better experience, then Apple, from a business point of view, has no choice but to make the most secure system it can while still providing the same UX.

All that said, I do believe that they should implement zero trust on first contact as a default, with the option to enable explicit trust for every attachment. I just do not believe that this will have any major impact on these actors' capabilities.
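
For what it's worth, here's a minimal sketch of how I read "zero trust on first contact": attachments from senders the user has never explicitly trusted are simply not auto-parsed or rendered. This is a hypothetical illustration, not anyone's actual messaging code; the class and the phone number are made up.

    # Sketch of a default-deny attachment policy (hypothetical).
    from dataclasses import dataclass, field

    @dataclass
    class AttachmentPolicy:
        trusted_senders: set[str] = field(default_factory=set)

        def trust(self, sender: str) -> None:
            """Explicit, user-initiated trust decision for a sender."""
            self.trusted_senders.add(sender)

        def may_auto_render(self, sender: str) -> bool:
            # Default deny: first contact gets no automatic parsing of
            # complex formats (images, PDFs), which is the zero-click path.
            return sender in self.trusted_senders

    policy = AttachmentPolicy()
    assert not policy.may_auto_render("+1-555-0100")  # unknown sender: hold attachment
    policy.trust("+1-555-0100")                       # user opts in after review
    assert policy.may_auto_render("+1-555-0100")

The inconvenience is obvious (every new contact needs one extra tap), which is exactly why I doubt Apple would ship it as the default.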


My point is that people seem to think that this feature somehow makes it easier for Apple to spy on you.

In fact it doesn’t make any difference at all, since they already have full access to everything you do on the phone. So if you don’t trust them, you shouldn’t use an iPhone.


> solidify misunderstandings

The key issue here is the very concept of "Apple turns your iPhone into a snitch", and I haven't seen any misunderstanding around that.


The security argument Apple has been spinning for so many years is a fallacy of the worst kind.

Like peddling life in prison, with Uncle Tim as warden, as opposed to the supposedly so much more dangerous and risky life out there in the wild, where you can do whatever you want. Like owning a Pinephone.


The argument for (1) fails to show how there's value for the user in having a "fully trusted hardware stack". More specifically, I don't see a reason why a user shouldn't be able to disable or re-key the hardware's signature validation after providing their login and acknowledging the risk.
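
To make the "re-key" idea concrete, here's a rough sketch of what user-enrollable boot signing could look like. To be clear, this is a hypothetical illustration and not Apple's actual boot chain; it uses Python's cryptography package, and all names are made up.

    # Sketch: firmware signature validation against a set of trusted roots,
    # where the owner may enrol their own key after authenticating.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_key = Ed25519PrivateKey.generate()   # stands in for the vendor's signing key
    user_key = Ed25519PrivateKey.generate()     # key the owner wants to enrol

    trusted_roots = [vendor_key.public_key()]

    def owner_enrols_key(authenticated: bool, acknowledged_risk: bool) -> None:
        """Add the owner's key to the root of trust, gated on explicit consent."""
        if authenticated and acknowledged_risk:
            trusted_roots.append(user_key.public_key())

    def boot_allows(image: bytes, signature: bytes) -> bool:
        """Boot only images signed by a currently trusted root."""
        for root in trusted_roots:
            try:
                root.verify(signature, image)
                return True
            except InvalidSignature:
                continue
        return False

    firmware = b"custom kernel image"
    sig = user_key.sign(firmware)

    assert not boot_allows(firmware, sig)   # locked down: vendor key only
    owner_enrols_key(authenticated=True, acknowledged_risk=True)
    assert boot_allows(firmware, sig)       # owner-keyed boot now accepted

Android's custom AVB keys and ChromeOS developer mode show that something along these lines can coexist with a locked-down default.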

As far as I can imagine, the only theoretical user benefit comes from being able to protect the device even against the user's own cooperation, assuming they are being coerced or are themselves a danger to the product. However, if it's possible to circumvent the security mechanism anyway through a security bug in the implementation, this becomes a moot point. The consistent availability of jailbreaks for iOS shows that the system is inherently insecure, and considering the vast attack surface, that's not a surprise either. There is no guarantee against the stack having been compromised.

The real leverage from the locked hardware is very much on Apple's side: after the device has been purchased, it lets them force the user into their ecosystem, apply arbitrary restrictions, and extract additional revenue from application and media purchases.

Regarding Touch ID, the sensor input (a fingerprint) has been shown to be forgeable without professional tools shortly after each revision was released. The technology is insecure out of the box, and my original point applies here too. You'd have to argue why the user shouldn't be able to trust a new sensor after authenticating with the primary method (the passcode), and how that substantially improves device security.

Liability is, by the way, hardly a concern: the EU has long had an implied warranty that doesn't get voided by third-party repairs. Damage from repairs isn't really any different from just dropping the device, and thus can be handled the same way.

Point (2) is not well justified either. For one, every battery has a connector, as it's fragile and dangerous - including Apple's. You can't run it through the reflow oven with the main board, and you generally don't want to add it to the assembly at the same time. The argument about repeated connector use is absurd; we're talking about very few replacements in the product's lifetime. There are plenty of cheap, low-footprint options that are specified for at least several uses, and almost(?) every battery uses one of them.

There have been high-end phones that supported toolless battery swapping in the past without showing huge tradeoffs for it. Going beyond toolless swapping, many phones can be fixed without being destroyed in the process - no tradeoff whatsoever. Not using excessive amounts of irreversible glue, or substituting some of it with screws, goes a long way. It can be assumed that the actual reasoning is more along the lines of not caring in order to save some marginal cost, or of deliberately preventing repairs.

Using less glue or a slightly different design costs next to nothing in environmental terms; such measures are hardly comparable to manufacturing the high-tech components. For recycling, it is fundamentally important to be able to separate the components easily, and any extended lifespan is vastly more beneficial than recycling to start with. The presented argument about any environmental advantage of unfixable designs is exceptionally weak.


Thanks; I didn't see that debate on security and I can see it being a valid point, especially with the burden of a technically-inclined person having to fix others' (family/friends) devices.

I was trying to point to the majority opinion that I've seen, and what I have seen from most of the community when security is mentioned is that Apple needs stronger security from a technical standpoint, rather than control over what is and isn't allowed on users' devices. I think both opinions hold validity, and you may be right that there is more nuance.


What's ridiculous is having a semantic argument when you know people's intent. Yes, someone said "locked" and someone said "encrypted", but what they meant was "secure". I think we all get their meaning. They are saying, what good is this if it is not secure? I mean, what good is it? It'll keep your significant other out, but not Apple employees and not the police? Is that the technical-level benefit?

That's pure whataboutism.

Apple is, according to its PR, the safest company. It has this feature which lets you decide on your security preferences. But in fact it doesn't let you decide on the most important aspects.

And your argument is: that's fine, because others are doing it too, and doing more?


A key point that needs to be mentioned: we strongly dislike being distrusted.

It might well be a genetic heritage. Being trusted in a tribe is crucial to survival, and so is likely wired deep into our social psychology.

Apple is making a mistake by ignoring that. This isn’t about people not trusting Apple. It’s about people not feeling trusted by Apple.

Because of this, it doesn’t matter how trustworthy the system is or what they do to make it less abusable. It will still represent distrust of the end user, and people will still feel that in their bones.

People argue about the App Store and not being trusted to install their own apps, etc. That isn’t the same. We all know we are fallible, and a lot of people like the protection of the store and having someone to ‘look after’ them.

This is different and deeper than that. Nobody wants to be a suspect for something they know they aren’t doing. It feels dirty.

