We'll have to wait and see whether the extra security measures actually improve anything or not.
However, regarding it being opt-out… what would prevent a virus from just silently enabling it on a bunch of machines? Sure, it would eventually be caught, but the damage would already be done, and most people won't bother to go in and disable it afterward.
Or Microsoft just decides they need to really market the hell out of AI and it gets turned on by default anyway.
There's no such thing as accidental enablement with stuff like this, as if it's a switch a single employee at Microsoft can bump with their elbow and have it end up in production without anyone else noticing.
Either they decide to intentionally enable it or not.
> Either they decide to intentionally enable it or not.

There are no accidents when stuff like this needs to go through a committee of people for approval before it makes it into production.
Absolutely. And all of them decided to screw largely defenseless non-technical consumers to make short-term profits. That's not a fantasy, that's our reality.
Without Recall, an attacker needs to get a program to stay resident in memory to log keystrokes, screen contents, etc. for an extended period of time without getting detected. With Recall, they can get the same end effect by exfiltrating the Recall database file whenever it's convenient (e.g. an infected version of a text editor could send it while pretending to check for updates). This significantly lowers the barrier to entry for getting a victim's data, while also making it much easier to avoid detection.
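To make that concrete: with Recall in place, the attacker's job collapses into roughly the sketch below. The path and filename are made up purely for illustration (I'm not claiming this is where Recall actually stores anything); the point is that the whole thing is a plain file copy running with ordinary user privileges.

    import shutil
    from pathlib import Path

    # Hypothetical location -- a stand-in for wherever Recall keeps its local index.
    RECALL_DB = Path.home() / "AppData" / "Local" / "Recall" / "snapshots.db"

    def grab_recall_index(dest: Path) -> None:
        # Ordinary user privileges: no driver, no hooks, no need to stay resident.
        # One copy whenever it's convenient, e.g. during a fake update check.
        if RECALL_DB.exists():
            shutil.copy2(RECALL_DB, dest)

    grab_recall_index(Path("update-cache.db"))

Compare that with the effort of keeping a keylogger resident and undetected for weeks.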
This is all moot anyway because Microsoft has already said they are now going to encrypt everything behind Windows Hello, making it as secure as my password manager.
Microsoft has made misleading statements regarding encryption [0], and it doesn't help much. Encryption at rest doesn't matter much if the user being logged in is enough for the data to be decrypted, because that's exactly the context malware runs in.
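A minimal sketch of what I mean, assuming purely for illustration that the data were protected with user-scoped DPAPI (not a claim about Recall's actual scheme): any code running in the logged-in user's session can simply ask Windows to decrypt it, no extra credentials required.

    # Requires pywin32. Assumed scenario: a blob protected with user-scoped DPAPI.
    import win32crypt

    def decrypt_as_current_user(blob: bytes) -> bytes:
        # CryptUnprotectData succeeds for any process running as the user who
        # protected the data -- which is exactly the context malware runs in.
        description, plaintext = win32crypt.CryptUnprotectData(blob, None, None, None, 0)
        return plaintext

Encryption at rest like this protects against someone pulling the drive, not against code already running as you.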
That's old information. This is how Microsoft is intending to change Recall based on these criticisms:
> Microsoft will also require Windows Hello to enable Recall, so you’ll either authenticate with your face, fingerprint, or using a PIN. “In addition, proof of presence is also required to view your timeline and search in Recall,” says Davuluri, so someone won’t be able to start searching through your timeline without authenticating first.

> This authentication will also apply to the data protection around the snapshots that Recall creates. “We are adding additional layers of data protection including ‘just in time’ decryption protected by Windows Hello Enhanced Sign-in Security (ESS) so Recall snapshots will only be decrypted and accessible when the user authenticates,” explains Davuluri. “In addition, we encrypted the search index database.”
But I'm glad to hear they've committed to making changes. Given the misrepresentations they made regarding the initial rollout plan (the target of most criticism, mine included), Microsoft has to prove themselves here and I'll wait until qualified security folks get their hands on this before coming to any conclusions.
What we know is that the initial version was a non-starter, and this new info validates the concerns we've all been expressing.
I truly hope Microsoft does an acceptable job of addressing this. It remains baffling and worrisome that it took a public outcry for them to implement what sounds like a baseline level of acceptable protection.
> It remains baffling and worrisome that it took a public outcry for them to implement what sounds like a baseline level of acceptable protection.
It's possible this was the intention all along, but as an early-beta feature this was just the MVP. The reason it was rolled out to early testers at all was to get feedback.
> It's possible this was the intention all along ... to get feedback
If they're relying on public feedback to realize how completely unacceptable the initial rollout was, that again points to deep problems at Microsoft and is why I'm saying this is baffling.
Microsoft is a big organization with different teams. It wouldn't surprise me if this front-end AI team didn't consider the larger security implications -- having it stored in your profile probably seemed sufficient. It's the same security all your documents have, your browser cache, etc.
If so, that's a problem. It might explain why this happened, but that doesn't mean it's an acceptable practice, especially after recently claiming that security is a primary focus for all project teams.
Security requirements often completely change the architecture of a product. Things can be built without security that are significantly more challenging to accomplish when strict data security requirements are in place. Architectures that assume no security often completely break down when security is tacked on top.
If this is a matter of a product not yet getting "security added", that again raises major concerns about how Microsoft is building products.
It doesn't sound like they're going to have significant problems adding more security to this product. As advanced as it sounds, it's not that complicated of a technology. It's just plugging together a bunch of existing technologies. I could probably MVP this app in a week myself given what is available.
I think that exploratory development is, in general, a good thing. Bogging down all development with middle-management procedures might certainly have caught this early. But that doesn't necessarily make that a better way to build products.
The scary thing about Recall isn't actually Recall itself. It's that AI makes this kind of product possible and really easy. I'm sure we're going to see implementations of this idea everywhere and not just on PCs. Imagine AIs watching security cameras.
Yeah, and the places I work with that run MS stuff keep getting hacked (big, important stuff) and leaking data, while the places running Linux or anything else, as far as I can tell, don't. Or at least not in the same number or quantity.
I have never seen crypto-locker ransomware on a server except for Windows servers. I haven't seen another OS with ads. So many terrible things happen only in the Windows/MS ecosystem that it really makes me wonder how it sticks around, but I have ideas about that and they will just make you think I am weird.
Virus turns on Recall, user might not notice much. A real Microsoft service is running. It can then just wait and activate later. If the user notices Recall is on, they'll just blame Microsoft. You can then just turn it on again. You can already see that many users suspect it'll go back to being on by default sometime in the future too. It's not uncommon to see system updates change settings.
A virus doing the same things as Recall itself would be much noisier and much more suspicious, making it much more likely to be removed.
Not to mention that once Recall has been running, a virus only needs to extract the data. It records far more than a password manager does and is far easier to search through. It just creates a very large attack surface.
Basically, why would anyone develop keyloggers anymore? Microsoft did it for you. And it'll never be tripped by antivirus software because it's an official and legitimately signed program. You don't see a problem with this?
They can't even run their own infra securely. Or did you forget that an advanced persistent threat actor was in their systems and minting certs to access all of Azure recently?