
Will have to wait and see if the extra security measures actually improve anything or not.

However, regarding it being opt-out… what would prevent a virus from just enabling it on a bunch of machines silently? Sure, it would be caught eventually, but by then the damage is done, and most people won't bother to go in and disable it afterward.

Or Microsoft just decides they need to really market the hell out of AI and it gets turned on by default anyway.




It will be re-enabled accidentally by an update anyways.

Please stop with the unmercenary assumptions.

There's no such thing as accidental enablement with stuff like this, as if it were a switch a single employee at Microsoft could bump with their elbow and have it end up in production without anyone else noticing.

Either they decide to intentionally enable it or not.


I'm not sure the use of 'accidentally' was sincere. But I like this choice of words from the first version of your post:

> unmercenary assumptions


Yet despite all that, I've witnessed accidents still make it into production...

I think OP forgot the quotes around "accidentally". You're right that it won't be a true accident; it will be intentional and just called an "accident".

> Either they decide to intentionally enable it or not. There are no accidents when stuff like this needs to go through a committee of people for approval before it makes it into production.

Absolutely. And all of them decided to screw largely defenseless non-technical consumers to make short-term profits. That's not a fantasy, that's our reality.


Yeah, but like I said, that's by intention, not by accident. How does your comment disprove my point?

Or by intent: it seems I was reading about an early proof-of-concept attack that turned Recall on and hid the systray indicator showing that it was on.

"accidentally"

What would prevent a virus from directly stealing the data it wants without going through this feature?

Just like in biology, a virus can be simpler if it can co-opt existing machinery.

I agree, the ability to take screenshots is unsafe and should be removed. A virus is just a PRT SCRN away from stealing everything! (/s)

You realize sneaking in code to arbitrarily exfiltrate user data is much simpler if a trusted source (in this case, Recall) is doing the collecting?

Without Recall, an attacker needs to get a program to stay resident in memory to log keystrokes, screen contents, etc. for an extended period of time without getting detected. With Recall, they can get the same end effect by exfiltrating the Recall database file whenever it's convenient (i.e. an infected version of a text editor could send it while pretending to check for updates). This significantly lowers the barrier to entry for getting a victim's data, while also making it much easier to avoid detection.

> Without Recall, an attacker needs to get a program to stay resident in memory to log keystrokes, screen contents, etc

Or it could just steal your cookies which are out there in the open.


Cookies are of relatively low value compared to a database of everything the user has typed and seen.

What value is that? My auth cookies are far more valuable than anything I typed out in the open today.

Your auth cookie expires.

The username/password you type in next time it expires is far more valuable.

And it might not even be necessary to obtain cookies or credentials if I can just see whatever you could see when you’re logged into various sites.


This is all moot anyway because Microsoft has already said they are now going to encrypt everything behind Windows Hello making it as secure as my password manager.

Microsoft has made misleading statements regarding encryption [0], and it doesn't help much. Encryption at rest doesn't matter much if the user being logged in is enough for the data to be decrypted. That is the context malware runs in.

https://doublepulsar.com/recall-stealing-everything-youve-ev...
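The "encryption at rest doesn't help against same-user malware" point can be shown with a toy model. This is not Microsoft's actual scheme (real Recall protection involves Windows Hello / ESS, and XOR is not real encryption); the only point is that if the key material is available to the logged-in session, every process running as that user can derive the same key.

```python
import hashlib

def session_key(login_material: bytes) -> bytes:
    # Toy key derivation: in this model the key comes from material
    # the OS makes available once the user has logged in.
    return hashlib.sha256(login_material).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric "cipher" (repeating-key XOR), standing in for
    # whatever real encryption-at-rest scheme is used.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The legitimate app and the malware run in the same user session,
# so both derive the same key and both can decrypt the data at rest.
material = b"available-to-any-process-in-this-session"
snapshot = b"everything you typed and saw"
at_rest = xor_crypt(snapshot, session_key(material))
recovered_by_malware = xor_crypt(at_rest, session_key(material))
```

"Just in time" decryption gated on a fresh Windows Hello authentication would change this model, which is why the follow-up announcement matters.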


That's old information. This is how Microsoft is intending to change Recall based on these criticisms:

Microsoft will also require Windows Hello to enable Recall, so you’ll either authenticate with your face, fingerprint, or using a PIN. “In addition, proof of presence is also required to view your timeline and search in Recall,” says Davuluri, so someone won’t be able to start searching through your timeline without authenticating first.

This authentication will also apply to the data protection around the snapshots that Recall creates. “We are adding additional layers of data protection including ‘just in time’ decryption protected by Windows Hello Enhanced Sign-in Security (ESS) so Recall snapshots will only be decrypted and accessible when the user authenticates,” explains Davuluri. “In addition, we encrypted the search index database.”

https://www.theverge.com/2024/6/7/24173499/microsoft-windows...


"Old" is a bit of a stretch here ;)

But I'm glad to hear they've committed to making changes. Given the misrepresentations they made regarding the initial rollout plan (the target of most criticism, mine included), Microsoft has to prove themselves here and I'll wait until qualified security folks get their hands on this before coming to any conclusions.

What we know is that the initial version was a non-starter, and this new info validates the concerns we've all been expressing.

I truly hope Microsoft does an acceptable job of addressing this. It remains baffling and worrisome that it took a public outcry for them to implement what sounds like a baseline level of acceptable protection.


Well it's not "old" since the article is about Microsoft's blog post where they discuss all these changes!

https://blogs.windows.com/windowsexperience/2024/06/07/updat...

> It remains baffling and worrisome that it took a public outcry for them to implement what sounds like a baseline level of acceptable protection.

It's possible this was the intention all along, but as an early-beta feature this was just the MVP. The reason it was rolled out to early testers at all was to get feedback.


> It's possible this was the intention all along ... to get feedback

If they're relying on public feedback to realize how completely unacceptable the initial rollout was, that again points to deep problems at Microsoft and is why I'm saying this is baffling.


Microsoft is a big organization with different teams. It wouldn't surprise me if this front-end AI team didn't consider the larger security implications -- having it stored in your profile probably seemed sufficient. It's the same security all your documents have, your browser cache, etc.

They clearly did not consider the larger security implications. That is both the point and the problem.

This points to structural issues at Microsoft.


Maybe security oversight happens later in the process. No need to bother with that if the feature doesn't even work.

If so, that's a problem. It might explain why this happened, but that doesn't mean it's an acceptable practice, especially after recently claiming that security is a primary focus for all project teams.

Security requirements often completely change the architecture of a product. Things can be built without security that are significantly more challenging to accomplish when strict data security requirements are in place. Architectures that assume no security often completely break down when security is tacked on top.

If this is a matter of a product not yet getting "security added", that again raises major concerns about how Microsoft is building products.


It doesn't sound like they're going to have significant problems adding more security to this product. As advanced as it sounds, it's not that complicated a technology. It's just plugging together a bunch of existing technologies. I could probably MVP this app in a week myself given what is available.

I think that exploratory development is, in general, a good thing. Bogging down all development with middle-management procedures might certainly have caught this early. But that doesn't necessarily make that a better way to build products.

The scary thing about Recall isn't actually Recall itself. It's that AI makes this kind of product possible and really easy. I'm sure we're going to see implementations of this idea everywhere and not just on PCs. Imagine AIs watching security cameras.


Why would someone trust Microsoft on security?

HN is a weird place. 95% of the world runs on Microsoft technology to some degree. (95% also runs on Linux to some degree as well)

Yeah, and the places I work with that run MS stuff keep getting hacked (big, important stuff) and leaking data, while places running Linux, or anything else as far as I can tell, don't. Or at least not in the same numbers or quantities.

I have never seen crypto-locker ransomware on a server except for Windows servers. I haven't seen another OS with ads. So many terrible things happen only in the Windows/MS ecosystem that it really makes me wonder how it sticks around, but I have ideas about that, and they would just make you think I'm weird.


What if Microsoft is the party I do not trust?

A virus turns on Recall, and the user might not notice much: a real Microsoft service is running. It can then just wait and activate later. If the user notices Recall is on, they'll just blame Microsoft, and the virus can simply turn it on again. You can already see that many users suspect it'll go back to being on by default sometime in the future too. It's not uncommon to see system updates change settings.

A virus doing the same things as Recall would be much noisier and much more suspicious, making it much more likely to be detected and removed.

Not to mention that once Recall has been running, a virus only needs to extract the data. Recall records far more than a password manager does and is far easier to search through. It creates a very large attack surface.

Basically, why would anyone develop keyloggers anymore? Microsoft did it for you. And it'll never be tripped by antivirus software because it's an official and legitimately signed program. You don't see a problem with this?


> what would prevent a virus from just enabling it

If that occurs, the malware won't have access to months or years of data to sift through.


Yet.

Malware that scrapes it and malware that turns it on don't need to be the same.


> Or Microsoft just decides they need to really market the hell out of AI and it gets turned on my default anyways.

This is what will happen. And when you turn it off again, it'll be turned back on by the next update. Enjoy.


They can't even run their own infra securely; or did you forget that an advanced persistent threat entity was in their systems recently, minting certs to access all of Azure?
