Hacker News
Google has a secret browser hidden inside the settings (matan-h.com)
914 points by matan-h | 2023-06-26 06:13:03 | 327 comments




TIL mobile JavaScript console https://eruda.liriliri.io/

I found that really interesting too. I've often wanted something like that!

Used to use Firebug Lite to get the equivalent in IE6: https://web.archive.org/web/20141217201617/http://getfirebug...

This is awesome. I hope there is a way to auto load a script so I can make some simple extensions.

I'll sometimes go through the trouble of using `data:text/html,<script></script>`, but that is impressively replete.

That's mega. Doesn't work for me in Brave on iOS, but does in Safari, I'll take that.

There even is a "remote version" of this by the same person: https://github.com/liriliri/chii

By using it you can open the devtools on another computer and all the information is synchronized over WebSockets. I used it once to debug an issue on a customers machine.


A bookmarklet would be nice. This is how the current developer tools in browsers started (Firebug).

Oh, Firebug! Blew my mind when I first saw it. No more View Source for me!

Bookmarklet included in the project README:

https://github.com/liriliri/eruda#demo
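For anyone who missed it there: the bookmarklet boils down to injecting the eruda script and calling `eruda.init()`. A sketch that builds the `javascript:` URL (the CDN path follows the README's demo snippet; verify it there before relying on it):

```javascript
// Build the eruda bookmarklet URL. When tapped, the bookmarklet injects
// the eruda script from a CDN into the current page and, once it loads,
// initializes the console overlay. Sketch only; the CDN path mirrors the
// README's demo snippet and should be checked against it.
function erudaBookmarklet() {
  const body =
    "(function(){var s=document.createElement('script');" +
    "s.src='https://cdn.jsdelivr.net/npm/eruda';" +
    "document.body.appendChild(s);" +
    "s.onload=function(){eruda.init()}})();";
  return "javascript:" + body;
}

// Paste the returned string as a bookmark's URL, then open the bookmark
// while viewing any page.
console.log(erudaBookmarklet().slice(0, 11)); // "javascript:"
```

Pasting it directly into the address bar often fails because browsers strip the `javascript:` prefix on paste, which is why the bookmark route works where pasting doesn't.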


Thank you! I must have expected an embed code and missed the javascript: part.

Is it just me or does this completely not work? When I paste the JavaScript snippet into the address bar, nothing happens. And in Nightly it just performs a Google search with that string.

Try putting it in a bookmark and then execute it.

How is this supposed to work? Opening the bookmarks page navigates away from the current page. Even then, selecting the bookmark does nothing.

Works in Firefox, not Chrome. Android.

1. Bookmark any page, making a dummy

2. Menu > Bookmarks > edit

3. Change URL of dummy bookmark to the js bookmarklet code.

4. Visit any site.

5. Menu > Bookmarks > tap on the bookmarklet

6. Widget appears on bottom right of page

It doesn't work on HN(?), but does work on other sites.


If you're on Android and want eruda, I've got a userscript to load it on every site here: https://github.com/Efreak/UserScripts/tree/master/Eruda-Mobi...

It helps with things like removing elements, because you can see the DOM; it's fewer clicks away and easier to use than uBlock, which doesn't show the DOM in the little box provided for element removal and only allows removing one item at a time (you can use multiple selectors, but every time you tap an element to get the selector it overwrites the existing content).


Unless I'm mistaken, Chrome mobile doesn't support userscripts (or extensions). Which browser are you using?

Step 5 onwards don't work on private tabs for some reason. For private tabs you can do all steps up to 4 and then:

5. Tap on the URL bar

6. Type part of the name of the bookmark you chose until it appears in search (in my case eruda works)

7. Tap the bookmarklet

For this to work you need to have bookmark search enabled in settings: Settings -> Search -> Search bookmarks

Also, there seem to be many sites where the widget doesn't appear, but you can try it at google.com.


Same. My phone just got a whole lot more powerful!

Is this different than any other embedded webview? Doesn't nearly every app have an embedded webview somewhere for things like "view privacy policy", where it is often much easier to display HTML than to rebuild the whole privacy policy as a native screen in your app?

Yea, I think that's all it is.

The webview appears to have privileged JS functions for password manager key management and recovery.

> appears

Until someone confirms that they actually do what the name suggests and what the speculation claims.


> Is this different than any other embedded webview?

Yes - it exposes an API for setting device encryption keys to the websites that you visit with it. At least that's how I interpret the last section, "The dangerous functions".


Do normal embedded webviews also bypass parental controls? If so, that seems like a massive issue.

Normally they are fixed to one domain.

Right, but if I've banned youtube.com in parental controls, it'll still load in, say, a Mastodon client with an in-app browser for opening links?

Embedded webviews were the easiest parental control bypass on Windows since about Windows 98. I've played so many flash games through the documentation for Microsoft Word!

I can't find any information anywhere that either confirms or denies the possibility of bypassing Google's restrictions with webviews. I assume it's possible, because it's possible on most platforms, but I suppose it depends on the implementation.

I've seen parental controls that employ an (on-device) MitM proxy and DNS filtering to ensure safety, and those apps will prevent almost any app from displaying unwanted content.


If any app that has an embedded webview allows bypassing parental controls, then this is an even bigger bug in Android…

(without even talking about this key management stuff, because at this point it's merely speculation as the author didn't test what they actually do: “you have two methods which I don’t know what they do, but they sound scary”)


Can you visit arbitrary websites using such webviews? I never managed to.

And IIRC it's rather difficult to set up a webview that allows multiple domains or URLs (but I'm no Android dev, and the last time I had to fiddle with this was years ago).


The reason it works here is that this particular webview opens a Google page that links to Google.com. There is no address bar, so any safe-browsing enforcement will make it at least two steps harder to access most bad content.

Blocking external domains shouldn't be that hard, but I also don't think parental controls are of any interest or priority for most app developers.


Last time I fiddled with it was when we moved domains and our webviews stopped working. They i) did not follow the redirects we had in place, and ii) did not allow loading the new URL without whitelisting that domain/URL somewhere in the source code.

IIRC whitelisting was the default in webviews; not sure if it still is, or if our expert Android dev configured it this way, but even allowing content to load from our new domain required a new build. (Let alone that anyone would have been able to navigate there, even if we had links or such in our about.html.)


Please use "allowlist" instead of "wh_telist".

huh? why? is there a bot checking for this word? will it get someone in trouble?

It'd be pretty simple to enforce sandbox/parental controls for the integrated webview browser.

1. Just limit the webview browser location to the same list as allowed by the parental control.

2. By default, limit the webview browser location to the domain first opened by the app, i.e. locked to a single domain by default.

3. Allow webview browser to be expanded via a regex/pattern list of domains.

4. Limit the number of webview browser location changes so even if you can access a search engine with a global domain allowlist, it would just return to the first page after N window.location changes.

There's plenty of introspection you can do via JS (which is already being used to set/inject that `mm` object); it could even check for certain DOM elements, HTTPS fingerprint, etc. to determine if the page is an "intended" destination for the particular integrated webview browser.
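Ideas 2-4 above amount to a navigation policy the host could consult before every location change. A minimal sketch in plain JavaScript, with all names invented (a real Android implementation would hook this into the webview client, not page JS):

```javascript
// Hypothetical navigation policy for an embedded webview browser:
// allow only an explicit list of domains (and their subdomains), and
// return false once the navigation cap is exceeded, signalling the
// host to reset to the start page. All names here are invented.
function makeNavigationPolicy(allowedDomains, maxNavigations) {
  let navCount = 0;
  return function shouldAllow(url) {
    const host = new URL(url).hostname;
    const allowed = allowedDomains.some(
      (d) => host === d || host.endsWith("." + d)
    );
    if (!allowed) return false;        // ideas 2/3: domain allowlist
    navCount += 1;
    return navCount <= maxNavigations; // idea 4: cap location changes
  };
}

const allow = makeNavigationPolicy(["google.com"], 2);
console.log(allow("https://accounts.google.com/")); // true (subdomain)
console.log(allow("https://example.com/"));         // false (not allowlisted)
console.log(allow("https://google.com/search"));    // true (2nd allowed nav)
console.log(allow("https://google.com/maps"));      // false (over the cap)
```

The suffix check has to include the leading dot, otherwise `evilgoogle.com` would match `google.com`.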


you forgot the reason webview exists! advertisement click attribution.

It's possible. I remember one app that opened a webview to their terms of use page, which somewhere had a link to a Google page, which I could use to go to Google search. So, no direct URL input, but you could go to any website indexed.

Or find a website like the mobile JS console people mentioned in this thread to link yourself anywhere, indexed or not

IIRC webview, by default, requires a dev to whitelist domains. Maybe that has changed, IDK.

But finding an example where you can navigate elsewhere is not proof that all webviews are broken; maybe they have this "security issue" by default and allow a dev to tighten it (bad security practice IMO), and maybe Android versions or SDK versions differ in how they behave, IDK. But the times that I encountered this and fiddled with it, it was a PITA to even allow loading a page from another domain.


That's exactly what this is. It's Android System Webview, the embedded browser that apps use when they aren't a browser themselves.

Sorry, but how is this news then? The Google settings have never felt native and therefore have almost certainly been a browser for a very long time now.

Why does it have to be news? Someone wanted to share what they found, that’s what personal blogs are for. Nothing more.

Well, that's neither news nor "secret". Nice find though.

It probably was news for many HN readers. "Hidden" would be the right term; I suspect that English isn't Matan's native language.

correct

Well, "news" is defined by the upvotes on HN ¯\_(ツ)_/¯

"Hacker News"

Yeah to be honest the presence of this article on the Best section of HN makes it look a lot more interesting than it really is. It's just a WebView...

I used to be able to do the same thing in iTunes years ago as a laugh. I don’t see how this is a huge deal.

Parental control bypass is the bigger issue here. Kids will do anything to get around parental controls, and Google made a promise when they set up parental controls that they were secure and would prevent your children from accessing things you didn't want them to. This breaks that promise.

When a child is powerful enough to start taking control from you it might be time to start giving it away.

That's true. It's also true that it might not be.

It should be noted that once this technique makes it to the playground, every kid will learn about the magic taps that make the web available, including kids that aren't ready yet.

Obviously, parents using parental controls will have questions about what their kids are doing for hours in the Google Settings app every day, but every kid will probably get that day or week of free browsing until their parents get suspicious.

That assumes parents bother to check on the statistics made available by parental controls, of course; if nobody checks, then the kid will access the web unrestricted for years.


I suspect my kid found a way to use Spotify to browse the web, because he seems to use it all day and when I ask him what he listens to he just says "stuff".

On the other hand, what kid wants to talk to their parents about what music they listen to?

What kid _doesn't_ want to talk to their parents about the stuff they're interested in? My kids have introduced me to some really interesting music, and vice versa.

I shared my dad's love of Jethro Tull and Talking Heads, but there was also plenty of music I wouldn't want to talk about with my parents. There are endless examples in any era of popular music.

I checked that, and it's actually possible: Spotify → your plan → see available plans → click on some plan → click "advertising" → find "reCAPTCHA" and click on "Google Privacy Policy" → scroll down until "google" → you get it :)

Or even decades

I haven’t used Android in some time (and never with parental controls), but is it possible to access this view at all if you’re under parental control restrictions?

This is a really strange comment for a technical message board. Is there a similar promise that there are no exploits in Android that can be used to circumvent parental controls?

There are webviews in all manner of apps and tools. No parental controls are watertight.

While I can see your hesitation, I think the solution is rather straightforward: block the Google Account settings behind parental controls. I don't think you'll want your kids logging out of their parentally controlled accounts so they can create new ones anyway, so that's probably a good idea regardless of the webview they can trick into opening Google.com.

You'll find webviews inside most apps because your average weather app developer isn't really interested in preventing kids from using their privacy policy webview to access porn.

I don't get it myself (why not just launch the default browser instead of adding a webview?) but I hope you'll see that these types of workarounds are not unique to Google's settings.


Google's increasingly cavalier attitude towards security is concerning:

1) Kids WILL use this to bypass parental / school controls as soon as they learn about it

2) In some contexts (especially as high-stakes test settings, but also some military/prison/finance/medical/legal/etc. settings) this IS a direct security risk

3) Given the embedded browser is not secure, if a lot of kids do this, it WILL lead to someone exploiting this, and machines being compromised and escalations

At Google scale, if 0.001% of accounts are impacted by a security vulnerability, that's still tens of thousands of people (you can do the math too). I don't think engineers at Google quite have a perspective on what it means when their decisions (not just on security) ruin thousands of lives.

What's astounding is just how good Google's security team was, especially in comparison, maybe 15 years ago. Now, it increasingly reminds me of the path Yahoo took.

Critically, issues build on each other and escalate. Most remote root exploits require overcoming multiple layers of security. Defense-in-depth is important. Google used to address issues when a single layer was breached, before they could combine into someone remotely rooting your phone. Now, Google fixes security bugs only after they've combined into a severe remote exploit (which often means many devices are compromised before an update goes out).


This is a pretty standard kiosk breakout technique; these have been super common since the 90s. They have always existed, and will continue to exist. The impact and use cases for issues like this are pretty negligible, so they don't get addressed as quickly as bugs that can actually be used for real crime.

Also, you say the embedded browser is "not secure", yet the going rate for browser bugs on Android is in the multi-million-dollar range, especially if they lead to root.


There are plenty of ways to invade someone’s privacy without being root. Stealing a Google Account would still be a prize.

At least Mozilla is still around to find all their bugs for them.

> 1) Kids WILL use this to bypass parental / school controls as soon as they learn about it

Good. Parental/school controls don't belong on the device. They belong on whatever the device connects to.

That would be parental/school networks.

If you don't want your kids to connect to things then don't let your kids have devices that connect to things.

> 2) In some contexts (especially as high-stakes test settings, but also some military/prison/finance/medical/legal/etc. settings) this IS a direct security risk

The direct security risk is using Google in the first place.

> 3) Given the embedded browser is not secure, if a lot of kids do this, it WILL lead to someone exploiting this, and machines being compromised and escalations

There's nothing in that statement that relates specifically to kids.


How would network restrictions help? There is WiFi at friends' houses / everywhere.

> There's nothing in that statement that relates specifically to kids.

Most adults probably don't have parental controls on their phone...


> How would network restrictions help, there is WiFi at friends houses / everywhere.

> > If you don't want your kids to connect to things then don't let your kids have devices that connect to things.


The comment said the restrictions belong on the network being connected to, which is useless when you can connect elsewhere.

Yes, not giving your kids access to a device is one option; parental controls are another.

I'm not sure either will work entirely, but that's another point.


And locking down the primary device a kid has access to creates the lure of the forbidden; that won't last any longer than it takes them to get access to another device without those restrictions. It also creates an environment where, if they do find something they really ought to be able to talk to an adult about, they can't talk to a parent because they'll be in trouble (and get their friends in trouble).

One of the widely studied aspects of child psychology is how to instill guidelines that last even when they're elsewhere without any enforcement other than self-enforcement.


Sure, as I said:

>I'm not sure either will work entirely, but that's another point


> How would network restrictions help, there is WiFi at friends houses / everywhere.

Aren't there other Internet-connected devices at friends' houses too?


Yes, however, locking your children in a Faraday cage is likely to be frowned upon.

Right, so device restrictions are useless.

> If you don't want your kids to connect to things then don't let your kids have devices that connect to things.

This is not an option, as school, at least in my region, requires devices directly since the 4th grade and indirectly even earlier, for homework.

Devices move between networks so having controls directly on the device is helpful.

Your argument is like arguing that there should be no local access permissions on files, and that we should just let the network handle everything.


> Your argument seems like arguing that there should be no local access permissions on files and just let the network handle everything.

Quite the contrary. Your local files are given to you by your local device. It's up to your local device to ensure that those files are properly access controlled.

But things on your network are given to you by your network. It should be up to your network to ensure that those things are properly access controlled. It should be up to you to ensure that you don't connect to networks which don't have proper access control.

> school, at least in my region, requires devices directly since 4th grade

If school requires things then school should provide things.

> indirectly even earlier for homework

Homework should be done at home. Are you saying that you don't have control over which devices on your network are able to access which things online? You should fix that.


> If school requires things then school should provide things.

A lot of things should happen in life. That doesn't make it so.


> If school requires things then school should provide things

They do. What’s your point?

My point is that parental controls are useful because kids must use devices. So protecting kids while using those devices is important.


Security belongs on the endpoint. How do you know there aren’t malicious or compromised devices on the school network?

Security means protecting against malicious incoming messages. Not user-initiated outgoing messages.

Visiting malicious sites can't harm a properly working device.


> Google's increasingly cavalier attitude towards security is concerning:

> [3 bullet points unrelated to security]

Security is a field related to protecting device users from malicious actors. Your 3 examples all fall broadly under parental controls, which are about controlling & monitoring a user's use of & access to their device - a scenario within which the user is the adversary, not external actors. That may be an important or necessary measure in some contexts, but classifying it as "security" is misleading.


> Security is a field related to protecting device-users from malicious actors.

You know - sometimes, just sometimes - it is also to do with protecting organisations from careless or malicious users. The three points are related to security, even if couched in terms of parents/children.


There are a lot of much easier ways to compromise security, both for careless and for malicious users. This is the fundamental difference between iOS and Android: if you want, you can ruin the security of Android, whereas it is harder to do on iOS. Definitely not impossible, though; you can sideload dangerous apps easily on iOS as well.

> There are a lot of much easier ways to compromise security, both for careless and for malicious users.

So what? There can be multiple ways to compromise security and it’s not like we only solve the easiest ways and leave the rest.

While there are easier ways today, when those get patched this will one day be the easiest.


I think you misunderstood me. Android deliberately allows users to hack into their own phone and remove its security. It allows users to install malicious apps if they want to, or even to root the phone entirely.

So there is nothing to solve or patch here. You could get iOS if you want users to not have that power (even there it isn't very hard to install a malicious accessibility app through sideloading).


I would hardly call disabling a security feature in the settings, or getting an authorization key from the vendor, hacking into your own phone. These are features that allow users who (think they) know what they are doing to do what they want to do. It is intentional, and people can figure out the consequences by doing some research. That is in stark contrast to finding an undocumented hole in security.

> sometimes, just sometimes - it is also to do with protecting organisations from careless or malicious users.

There are two cases where this is true: a user intentionally sharing internal access with external malicious actors, or a user unintentionally sharing internal access with external malicious actors (e.g. social engineering / general incompetence). Neither apply to kiosk breakouts.


You seem very sure that those are the only two security risks to an expected browser being available on an otherwise managed device. I'm pretty certain there may be other risks.

One can absolutely make an argument for a great many risks to be classified as security concerns: there are certainly more than just these two. But doing so is simply reductio ad absurdum.

To expand on this, we can if we choose classify all parental controls under general access control, and within a principle of least privilege further classify the following as legitimate security risks:

- access to the internet
- access to a keyboard
- read access to a disk

There are absolutely scenarios one can concoct where these are real concerns. The settings panel of a general-purpose consumer device doesn't fit that Venn diagram for me. Is it a bug: yes. Is it a security bug: no.


Please take this as critical feedback, and not as a personal attack: The comments which you are making here suggest that you shouldn't develop any software which in any way touches personal data without significant upskilling on IT security. You're making false comments with complete confidence.

Most security scenarios came about as a result of attackers being able to bring systems into absurd situations, and moving systems through unintended pathways.

"Reductio ad absurdum" could apply to most digital exploits before they've happened. "Why would the system get into that state?"

That's a key difference between physical security and digital security:

- In a physical situation, I need to worry about what a typical criminal trying to break into my home or business might do. That requires reasonable measures.

- In digital security, I need to worry about what the most absurdly creative attacker on the internet might do (and potentially bundle up as a script / worm / virus / etc.). I do need to worry about scenarios which might seem absurd for physical security.

If you engineer classifying only "reasonable" scenarios as security risks, your system WILL eventually be compromised, and there WILL be a data leak. That shift in mindset happened around two decades ago, when the internet went from a friendly neighborhood of academics to the wild, wild west, with increasingly creative criminals attacking systems from countries many people in America have never heard of, and where cross-border law enforcement is certainly impractical.

I've seen people like you design systems, and that HAS led to user harm and severe damages to the companies where they worked. At this point, this should be security 101 for anyone building software.


Seems like an argument about system-driven and component-driven risk analyses - they both have their place, and they're not mutually exclusive. Risk-based approaches aren't about either removing all risk or paying attention to only the highest priority ones. Instead, they are about managing and tracking risk at acceptable levels based on threat models and the risk appetites of stakeholders, and implementing appropriate mitigations.

https://www.ncsc.gov.uk/collection/risk-management/introduci...


It's a slightly different argument. The level of "reasonable risk" depends on the attacker in both situations.

The odds of any individual crafting a special packet to crash my system are absurdly low.

However, "absurdly low" is good enough. All it took was one individual to come up with the ping-of-death and one more to write a script to automate it, and systems worldwide were being taken down by random teenagers in the late nineties.

As a result of these and other absurd attacks, any modern IP stack is hardened to extreme levels.

In contrast, my house lock is pretty easy to pick (much easier than crafting the ping-of-death), and I sometimes don't even remember to lock it. That's okay, since the threat profile isn't "anyone on the internet," but is rather limited (to people in my community who happen to be trying to break into my house).

I don't need to protect my home against the world's most elite criminals trying to break in, since they're not likely to be in that very limited set of people. I do need to for any software I build.

That applies both to system threats and to component threats. Digital systems need to be incredibly hard.

Google used to know that too. I'm not sure when they unlearned that lesson.


Do you think there’s a standard for “incredibly hard” that all applications need to follow? Or that it varies from one application to another depending on context?

It depends on context. There are many pieces here:

1) Cost of compromise.

- For example, medical data, military secrets, and highly-personal data need a high level of security.

- Something like Sudoku high scores, perhaps not so much.

2) Benefit of compromise. Some compromises net $0, and some $1M.

- Something used by 4B users (like Google) has much higher potential upside than something used by 1 user. If someone can scam-at-scale, that's a lot of money.

- Something managing $4B of bitcoin or with designs for the F35 has much higher upside than something with Sudoku high scores.

3) Exposure.

- A script I wrote which I run on my local computer doesn't need any security. It's like my bedroom door.

- A one-off home, school, or business-internal system is only exposed to those communities, and doesn't need to be excessively hardened. It's more-or-less the same as physical security.

- Something on the public internet needs a lot more.

This, again, speaks to number of potential attackers (0, dozens/hundreds, or 7B).

#1 and #2 are obvious. #3 is the one where I see people screw up their arguments. Threats which seem absurdly unlikely are exploited all the time on the public internet, and intuition from the real world doesn't translate at all.


If I’m reading you right, if a business had a non-critical internal system (internal network behind a strong VPN) with the potential for a CSRF attack, you wouldn’t call that a risk?

It's a risk.

It's like having glass windows (at least at street level).

Whether it's a risk worth addressing depends on a lot of specifics.

For example, a CSRF attack on something like sharepoint.business.com could be exploited externally with automated exploits. That brings you to the 7B-attacker scenario, and if the business has 100,000 employees, likely one of them will hit on an attack.

A CSRF attack on a custom application only five employees know about has decent security-by-obscurity. An attacker would need to know URLs and similar business-internal information, which only five people have access to. Those five people could just as easily walk into the CEO's office and physically compromise the machine.


>it is also to do with protecting organisations from careless or malicious users.

What about protecting users from careless or malicious organisations?


If you (in this case the parent) block something (in this case browsing porn sites) on some software (in this case an Android device), it most definitely _is_ a security issue if the user (in this case a child) can bypass the restriction you imposed. I don't understand what's not clear there? If your phone is locked with a PIN and you pass it to your friend (Stifler) because his mom just called you, he should not, under any circumstances, be able to unlock your phone without knowing the PIN code. That's the first issue. The second security issue is the possibility of any website calling private internal Android functions for (potentially) setting encryption keys of your device (!!!) You don't consider this a security issue?

Related: I used to root my old Android phones by going to rooting websites that would do it all in-browser.


> which are about controlling & monitoring a user's use & access of their device - a scenario within which the user is the adversary, not external actors

Access control falls squarely under security. Also, the user should be considered the adversary, because they or programs that run on their behalf might be malicious, either knowingly or unknowingly. Not accounting for this is one of UNIX's biggest blunders.


The user is generally never the adversary in any legitimate security situation. Ignorance might be, but that's not something inherent to the user, and it's an area for improvement.

Privilege escalation is a typical class of security issues.

The device owner (parent, school, etc.) set restrictions, which some other user bypasses.


Right, that’s a parental control scenario.

A Linux box with a root user and an end user, where the end user can run things as root without root authentication: is that also parental controls?

Could be an organizational need, like medical files and HIPAA.

> The user is generally never the adversary in any legitimate security situation.

First, this isn't correct: consider, for instance, DRM and TPM.

Second, "the user" does not have direct access to the computer internals, which means all such access is mediated by programs that are supposed to act on the user's behalf. But because software is not formally verified, we have no guarantee that they do so, and so we must assume that any program purporting to run on the user's behalf is intentionally or unintentionally malicious. This is where the principle of least privilege comes from.


> > The user is generally never the adversary in any legitimate security situation.

> First, this isn't correct, for instance, DRM and TPM.

You must have missed the word "legitimate". DRM and TPM are two of the best examples of illegitimate "security".


First, that's a matter of opinion. Second, it's still wrong per my second point.

If you don't think DRM is illegitimate security, then what do you think is?

It still falls under security, obviously, which is why I listed it. Whether you like it or not is irrelevant.

The word "user" is ambiguous.

There are two kinds of relationships between an "user" and a computer.

The computer may belong to the employer of the "user", and the "user" receives temporary local access or remote access to it in order to perform the job's tasks. Or the computer may belong to some company that provides some paid or free services, which involve the local or remote use of a computer.

In such a context, the main purpose of security is indeed to ensure that the "user" cannot use the computer for anything else than what is intended by the computer owner.

The second kind of relationship between a "user" and a computer is when the "user" is supposed to be the owner or the co-owner of the computer. In this case the security should be directed only towards external threats and any security feature which is hidden or which cannot be overridden by the "user" is not acceptable.

Except perhaps in special cases, parental controls should no longer be necessary after a much lower age than usually claimed, as they are useless anyway.

I have grown up in a society where everybody was subjected to parental controls, regardless of age, i.e. regardless of whether they were 10 years old, 40 years old or 100 years old.

Among many other things that were taboo, there was no pornography, either in printed form, or in movie theaters or on TV.

Despite this, the young children, at least the young boys, were no more innocent than they would be today given unrestricted access to the Internet. At school, after the age of 10 years, whenever there were no adults or girls around, a common pastime was the telling of various sexual jokes. I have no idea what the source of those jokes was, but there was an enormous number of them and they included pretty much everything that can be seen in a porno movie today. The only difference between the children of that time and those who would be exposed to pornography today was that, due to the lack of visual information, both those who were telling and those who were listening did not understand many of the words or descriptions included in the jokes.

So even Draconian measures designed to "protect the innocence of the children" fail to achieve their purpose and AFAIK none of those boys who "lost their innocence" by being exposed to pornographic jokes at a low age were influenced in any way by this.


In this case the “user” is in part the person granting controlled access. The person moving the mouse is not the user in total.

Take an easier example: an ATM. If a person touching it can access accounts or remove money, there is no question about it being a security problem.


Yeah, it's important to make a distinction between the "user" and the device owner. Often those are the same person but not always. Treating the user as an adversary can be okay in some circumstances, but treating the device owner as an adversary is never acceptable in my opinion.

The whole problem with security is that it's often difficult to tell whether all steps of what is happening now align with the device owner's true intent--

* Is it the device owner providing the direction to do this?

* Will the input being consumed as a result of this direction result in actions that the device owner approves of?

etc.

A kind of blanket assumption that everyone and everything is the adversary is a good starting point. The system needs to protect itself, in order to be able to faithfully follow the owner's instructions in the future.


Someone on an ATM accessing accounts other than their own is a security problem. Someone on an ATM accessing youtube is not a security problem.

I'm not so sure. It could be considered a DoS if nothing else, and throwing porn up on an ATM screen could certainly cause a company enough problems that they would consider it a security problem. If you can load YouTube on an ATM, you could probably also load a different site with a fake ATM screen that collects PINs and/or other personal information (account numbers would be more difficult unless you have a way to access the card reader). But any full-featured browser in an ATM capable of being instructed by an attacker to load the attacker's JS is very likely a major security issue waiting to happen.

Being able to display whatever you want on an ATM is absolutely a security problem. I could put up a fake PIN prompt, a prompt to enter the card number because the reader is broken, whatever. This comment section is blowing my mind, and is a great example of why dedicated security teams are required in the world of software.

You're assuming they have control of a lot of the screen, and that they have access to the keypad. Or even that they can get to sites other than youtube. Please don't assume the case that makes my post the weakest.

Your mind is blown because you're reading way too much into my hypothetical.


> Or even that they can get to sites other than YouTube.

Playing a video directing the user to call a number would be enough to trick some people. Enabling social engineering is a security problem.

Security is minding the specifics, which requires not assuming things are ok. That's why red teams exist, and why the default assumption of "it's not ok" is the correct assumption. ;)

We'll find out if the specific case in the article is a problem or not, once people look at it very closely. We may not have this luxury with our hypothetical ATM, though.


I'm not here to make assumptions. I'm here to point out "being able to open a web page in a context like that is not necessarily a security problem", and I'm sure you can think of an example if you don't like my example.

With an existence proof, you only have to worry about the narrowest possible interpretation. The skill of considering what an exploit could lead to is very important, but it fits oddly into such a hypothetical. Finding a possible flaw doesn't invalidate an existence proof unless you also can't think of a way to mitigate it.

Also if the video is small and says youtube and tricks a user I'm not sure I would call that a security problem. You can trick users with a post-it note, and that doesn't mean there's anything wrong on a technical level.


> You can trick users with a post-it note, and that doesn't mean there's anything wrong on a technical level.

Sure, but something present on the screen of a trusted system is very very different than a post it note. This claim is why I'm sticking strong by my assertion that this is why red teams exist. That's a really baffling view of security, to me.


You're still not listening to my real point.

Pretend it can only play a rickroll, no other videos. Or I could come up with something more reasonable like "it only does top trending and you need to hold keys down so if you don't have skimmer-level tech to shove in you can't persist the exploit" or whatever.

I'm saying there's some scenario where it's not a security issue.

You don't need to prove that there are scenarios that are security issues. That's obvious.


Excluding parental controls from 'security' feels like more of an ideological stance than a practical one.

I can see the argument based on Free Software principles. But I don't see anything else. There are so many cases of devices that are facing a user but not owned by the user which very much do fall under 'security'. Public terminals are a big one, devices handed out to employees in certain cases are another, and esoteric cases like prisoners also exist. Those should very much count as security, if only because 'when something breaks dangerous things can happen'. Then excluding parental controls because 'censorship bad' doesn't make much sense, since parental controls and other device lockdowns are often implemented with the exact same methods.

There are plenty of more evil things, like locked-down secure boot and TPM-grounded DRM, that definitely fall under security, so I don't think it makes sense to gatekeep the term.

Heck, security as a term is so often used oppressively that it makes little sense to gatekeep it anyway.


The role of the user as adversary is complicated, but it includes things like unintentional and coerced or duped actions. The desired behavior is to protect the user from their own mistakes or victimization. Some of the concerns GP raises overlap with security. In secure programming, the threat model always includes "user error".

Ya, I could not upvote this more. Honestly physical security is usually one of the biggest fail points you will see in security audits. I also agree that there is nothing wrong with viewing users as potentially adversarial. I guess some of these responses surprise me, is all. I urge any sysadmins working with physical servers to reevaluate their access controls.

> there is nothing wrong with viewing users as potentially adversarial

There is a world of difference between considering users adversarially (social engineering is the most common threat vector bar none) and considering kiosk escape a serious threat.


Defining your way out of giving secure, as in safe, devices to kids is frustrating. And sadly reflective of exactly why the original comment is correct.

Well said. Spoken like a true Google engineer! However, I think you misunderstand security as a field, at least one of my three points, as well as children and parenting.

===================

Security as a field

===================

You wrote: "Security is a field related to protecting device-users from malicious actors."

This is a very narrow and incorrect definition. Security as a field relates to many things, including for example protecting confidential information. If my medical information is handled by a hospital, I would like to know that information does not land on the dark web. In order to do this, the hospital needs to implement processes which protect my information from nurses being socially-engineered, doctors installing spyware, and countless other threats.

This is handled in-depth:

- Personnel handling my sensitive data should be screened.

- There should be technological restrictions on the devices preventing both malicious actors and errors

- There should be training in place

- There should be appropriate legal safeguards (NDAs, employment agreements, etc.)

- And so on.

Managing confidential information involves having managed devices. In many cases, these are also in physically-secure facilities and intentionally kept off-line. They don't belong to the person using them.

=========

Bullet #3

=========

One of the points in the original article is that the embedded browser has "a weird JavaScript object named mm" which appears to be used to handle things like security keys. This is a security issue in the narrow sense you've defined. If my child (and many other kids) uses this to bypass parental controls, their device is likely to be compromised by a malicious actor if they browse to a malicious web site.
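To illustrate why an injected object like that widens the attack surface: any page loaded in that web view can cheaply probe for it. A minimal sketch — only the name "mm" comes from the article; the probe function, the fake `window`, and the method name are assumptions for illustration:

```javascript
// Hypothetical sketch of how a page script might detect privileged
// native bridges injected into an embedded browser. Injected bridges
// typically appear as non-standard global objects exposing methods.
function findInjectedBridges(globalObj, suspectNames) {
  return suspectNames.filter(
    (name) => globalObj[name] != null && typeof globalObj[name] === "object"
  );
}

// Stand-in for `window` inside the hypothetical embedded browser.
const fakeWindow = { mm: { registerKey: () => {} }, location: {} };

const found = findInjectedBridges(fakeWindow, ["mm", "android", "webkit"]);
console.log(found); // found deep-equals ["mm"]: the probe spotted the bridge
```

A malicious page that finds such an object can then start fuzzing its methods, which is why exposing it to arbitrary origins would matter.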

========

Children

========

You described kids as "a scenario within which the user is the adversary"

I don't know if you've ever interacted with young kids before, but they're not so much the adversary as oblivious and clueless. Before they're teenagers, most are sweet, charming, and WANT to do the right thing. However:

- They have no idea what a "buffer overflow attack" is, let alone phishing and other standard scams

- They're very easy to socially engineer. If you're a Random Adult, and ask them for a password, and give a stern look, they'll probably give it to you.

- They have no idea of the kinds of malicious actors on the internet. If someone tells them "To enable Angry Birds, go to this special dialogue," they might very well do it. There are online videos of malicious actors tricking little kids into e.g. washing their devices in a sink, or sticking them into a microwave purely for the LOLs. Mean people do these things to kids.

... and so on.

The reason to control and monitor what little kids do (not just digitally; the same applies to kitchen knives, fireplaces, and swimming pools) has very little to do with treating them as an adversary, and a lot to do with treating them as little kids who need an adult to help them learn.


That's a nice long reply - I'll try and keep my response a bit shorter.

You (and many many of the replies in thread) have taken the initial topic (kiosk escape -vs- parental controls) and are defending their definition as a serious security threat by likening them to social engineering attacks on medical staff. These are separate scenarios with separate threat models. If your child is sending confidential corporate data to malicious third parties through the Android settings app you may have a separate set of problems beyond software controls.

Overall, much of the finer details of yours & others' replies amount to an extreme level of theoretical pedantry around technical classification of threats, completely removed from any kind of real-world analysis of their severity.


I'll keep it short too: You don't understand what (many) people are trying to explain to you, and are coming to dangerously incorrect conclusions. If many people are giving you the same feedback, it should trigger something in your head, but somehow, it doesn't. You don't even seem to be trying to understand or considering the fact you might be missing something, so I'm giving up.

Please do not ever build systems which touch any sort of critical data or which work on consumer devices outside of a sandbox until you've picked up a basic clue about security.


This could also be considered a sandbox bypass. A device/application is given a limited set of capabilities to ensure that if something does go wrong, the affected area is small and well known. This effectively eliminates those safeguards and provides a gaping hole that most systems designers would think had been closed via other configuration. As others have pointed out: kiosks, schools, prisons, POS, the check-in device at a doctor's office, and any other managed device have a reasonable expectation to behave as their admins have configured them, for the sake of not only the person who has the device, but also the person sitting next to them whom they could affect by misuse of the device.

Systems have firewalls, ulimits, pledge, acls, permissions, sometimes physical lock and keys to prevent users of the system from doing things that owners or operators of the system have decided should not be permitted. As others have mentioned, this might be for security, compliance, CYA, or just reducing the number of variables to consider in a system.


> This could also be considered ...

I agree, but you've very appropriately used the word "could" here. The gp bemoaned Google not prioritising this issue as a serious security concern. Whether it could theoretically be classified under security if X, Y & Z were true, due to the to-the-letter definition of access-control threat models, doesn't mean that in this specific case of a consumer device, using a browser from settings is a high-severity risk. Even if it were a bypass of something like Nessus/Crowdstrike/et al (and not just consumer parental controls), it still wouldn't represent a significant threat as a simple kiosk escape in isolation.

Any definition that classifies this as the gp is proposing is a theoretical nitpick, not an actual considered threat model.


This is literally a privilege-escalation attack: i.e. the user escapes from controls that are imposed on them by the device manager (which may well not be the user, but a corporate MDM platform).

Are you suggesting that privilege-escalation attacks are not security risks?


> Are you suggesting that privilege-escalation attacks are not security risks?

Nope. What I'm suggesting is that threat modelling is important. If attack vectors were classified equally based on technicalities we would have infinite surface area. Kiosk bypass might be vaguely categorisable alongside things like polkit exploits but they are not equivalent in any normal threat model.


No, you said they are 'unrelated to security.' Just admit you made a mischaracterization. It happens.

I agree with lucideer here. While I think the language chosen needlessly leaves space for pedantic arguments, they're correct that, from the context of Google's software, none of these are relevant to the security that Google needs to care about.

It's true they could be part of the things security needs to care about, but so is a phone catching on fire because of its battery, which in and of itself is not directly a security risk.


>Nope. What I'm suggesting is that threat modelling is important. If attack vectors were classified equally based on technicalities we would have infinite surface area.

OK, so we agree that your original statement (which follows) is wrong, because it makes broad, tacit assumptions about the threat model that are not justified?

> Security is a field related to protecting device-users from malicious actors.

Whereas a more conventional definition of information security would also involve protecting systems from unauthorized access, including privilege escalations (that's the E in STRIDE, right?) that bypass controls that were intended to apply to the user.

Honestly, it's baffling to me why you're arguing this point.


I wouldn't say that escaping a control is always a privilege escalation. Browsing like this doesn't access any data, privileged or not, and you already had internet access. You're still in a very tight sandbox.

> Security is a field related to protecting device-users from malicious actors.

This is one of many aspects of security-- perhaps what Google considers most important on Android, but surely you can imagine some scenarios which we care about which aren't about an end-user getting attacked.

(Indeed, sometimes security is all about protecting infrastructure, assets, or information from device users).

Besides, the third point that you cavalierly dismiss above:

> > 3) Given the embedded browser is not secure, if a lot of kids do this, it WILL lead to someone exploiting this, and machines being compromised and escalations

directly relates to even your limited notion of security.


The third bullet point explicitly mentions the device being compromised, so I think it’s unfair to paint that as unrelated to security or just a parental-control issue.

As someone who remembers being a child, I'm glad there are still ways around parental controls. Kids are going to break rules, and that's fine. Making arbitrary rules unbreakable has always seemed iffy to me...

> Making arbitrary rules unbreakable has always seemed iffy to me...

It creates better hackers.


I'm not so sure about that, though I guess you are probably joking. Kids are given "smart" devices which make it easy to consume stuff, but nigh impossible to break out of, let alone create stuff on (or at least nigh impossible to create code).

IMO, it creates fewer hackers.


as one of these young “hackers” that has always found ways for circumventing restrictions, I can definitively tell you that every kid that uses these bypasses has a different level of understanding of the “hack”. for example, some of my friends use the bypasses that I make, and they don’t have to understand the tool to use it. so while there are many (s)kids using these, it’s actually a very small percentage that learn how to make the bypass themselves, and become “better hackers”.

How else will your kids learn to be l33t hackers without motivation?

>1) Kids WILL use this to bypass parental / school controls as soon as they learn about it

Doesn't sound concerning, especially the latter.


As a parent, it’s concerning to me.

It’s funny how I never thought it would be an issue but kids have real impulse control issues and devices are super easy to spend too much time on and contribute to negative mental health. Screen time controls don’t solve this, but they help a little bit as part of many other things to help people learn about how to self regulate.


When I was a kid we managed to hack into my school's lab's computers to install Doom and Warcraft II.

Good times.


I vividly remember cracking the admin PWs with ophcrack or the school WEP wifi with aircrack-ng.

> Doom and Warcraft II

If that's all today's kids had access to, I wouldn't be worried about it either.

I work with "average" kids today that have access to far more developmentally-damaging media, and I want the few kids that have parents that care enough to set up controls, to have a fighting chance.


Oh, I am absolutely sure of that.

I didn't mean my silly recollections there to be a way to handwave the concerns of people with parental controls nowadays. Those are important.

I just miss those simpler times. The most risque thing we got our hands on back then were low resolution porn clips. Perhaps some odd hentai AVI with mangled translation.

People used to be up in arms about something silly as Carmageddon being damaging to kid's mental health while truly awful stuff such as social media was brewing on the horizon.


For 90's kids it was video games that were gonna rot your brain and make you a bad person. That turned out to be false.

For 00's kids the new boogeyman is "social media". Likely will turn out false too.

Just sounds like a cop-out way to blame anything other than poor parenting.


There is definitely poor parenting at play here, and there are also way more, easier access, brain rotters today. 100 years ago, parents weren't giving their babies electronic pacifiers (tablets with YouTube playing).

I mean, 100 years ago none of that crap existed, so yeah, no shit they weren't doing that. But I'm sure there were things of a similar nature that existed then as well.

Social media may not be "bad" but has serious consequences, including its use to try to overthrow an election.

Seeing some pretty concerning NEET "battlestations" online I'd think the parents of the 90s were right for at least some of those kids. Doing anything all day to excess to the detriment of everything else is bad, whether it be TV or video games or social media or whatever distraction comes next.

What’s funny is so did I. So I’m definitely a hypocrite in restricting what I was not restricted in doing.

As a software engineer with kids, let me tell you: not only do they already bypass parental controls on both Android and iOS (I've yet to figure out how on this one) devices, but they discover these tactics from other kids in school that were savvy enough to find tutorials online. It's not like when I was a kid and had to dive into the registry to find the key associated with the program, understand what a swap file was and why it might contain credentials, write my own code to circumvent the controls, etc. They're not really learning anything about the system when they bypass the controls.

Google outsourced work to low cost bidders and now they get low quality results.

Says the person whose last comment is upset that their engineers have a $300k salary?

Truth must have really hurt you, if you even went through my post history.

My earlier comment was that a guy earning $300k shouldn't say that things are cheap.

My current comment is that Google does not hire the best anymore, but outsources to the lowest-cost bidders, and the results are visible: quality suffers.


What are they paying $300K for, if outsourcers are doing the work?

Probably designers who make new products (that are likely to get cancelled). While their popular cash cows are maintained by lowest bidders since it is boring / difficult.

Difficult, sometimes. Boring, no.

I didn’t, I just spotted your comment in the other thread.

As a security researcher, I have to disagree - there's many things to criticize Google for, but "cavalier attitude to security" isn't one of them.

Their security teams are industry-leading and they have done a lot of important work over the past decade (Project Zero, a very well-done bug bounty program, Advanced Protection, FIDO/hardware security keys, large-scale fuzzing and AFL, tons of behind the scenes sandboxing work, Linux kernel hardening...). They have a fine track record keeping their users safe (...from anyone but themselves and the US government).

> Given the embedded browser is not secure

It's a standard web view, which uses the same engine and is sandboxed the same way the standalone Chrome browser is. There's a few extra APIs injected into it, but chances are that they require authentication or simply check the origin. What makes you think they didn't take this into account when triaging the report?
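If the injected APIs do gate on origin, as speculated above, the check itself is straightforward. A minimal sketch — the allowlist, function names, and the "register security key" operation are all assumptions for illustration, not Google's actual implementation:

```javascript
// Hypothetical origin allowlist for a privileged injected API.
const ALLOWED_ORIGINS = ["https://myaccount.google.com"]; // assumed

// Refuse to run a privileged operation unless the calling page's
// origin is on the allowlist.
function guardedCall(callerOrigin, fn, ...args) {
  if (!ALLOWED_ORIGINS.includes(callerOrigin)) {
    throw new Error("origin not allowed: " + callerOrigin);
  }
  return fn(...args);
}

// Stand-in for a privileged operation only trusted pages may invoke.
const registerSecurityKey = (keyId) => `registered:${keyId}`;

console.log(guardedCall("https://myaccount.google.com", registerSecurityKey, "k1"));
// prints "registered:k1"; any other origin throws:
// guardedCall("https://evil.example", registerSecurityKey, "k1");
```

With a check like this in place, a page reached via the kiosk escape could see that the bridge exists but couldn't invoke anything privileged, which is presumably part of why the report was triaged the way it was.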

There's hundreds of these web views with plenty of opportunities to "escape".

> Now, Google only fixes security bugs only after they've combined into a severe remote exploit

[citation needed]

Things like Chrome entirely rely on multiple layers of protection and, like any sensible vendor, they will absolutely fix a bug in, say, the renderer process even if there's no full-chain exploit.

> In some contexts (especially as high-stakes test settings, but also some military/prison/finance/medical/legal/etc. settings) this IS a direct security risk

In a kiosk or proctoring environment, you wouldn't be able to browse Google account settings in the first place. It's a non-issue.


> Their security teams are industry-leading and they have done a lot of important work over the past decade (Project Zero, a very well-done bug bounty program, Advanced Protection, FIDO/hardware security keys, large-scale fuzzing and AFL, tons of behind the scenes sandboxing work, Linux kernel hardening...).

I have to agree. Google has O(200k) employees, and included among those, are some of the best security people in the world. Indeed, many are left over from historic Google.

However, there's a huge difference between having high-calibre employees and having those employees impact the security of the huge numbers of products Google develops. Most of those employees do fine research, but have no influence on the typical Google product.

> They have a fine track record keeping their users safe ... [citation needed]

Let me tell you a story. I use Google Workspace Free. My account was compromised, not through much fault of anyone involved (long story, involving being targeted by a criminal actor who gained physical access to a device).

I wanted to collect records, go to the police, and have the criminal arrested. Google had clear logs of what happened. I found out that security was a value-added product. I'd need to switch from my version to a paid version, and could never switch back. The cost was going to be $6/user/month for the rest of my life, times a dozen family members, times 12 months, times another 60 years of life, which is around 50 thousand dollars.

$50 grand.

To get audit logs.

You can guess what I decided.

There was no way to prevent this retrospectively, but it'd be very easy to prevent prospectively. It just wasn't worth doing for $50k. The criminal is still out there. They might be targeting your home or business!

Thanks Google!

Another good story -- impacting a significant fraction of low-income individuals in the world -- is withholding security updates for Android after a few years to keep people on the upgrade treadmill. New devices have frequent updates. Older ones have slower updates, until at some point, the updates stop. Phones get compromised, and attackers do ransomware, identity theft, and other sorts of nasty things.

Thanks Google!

Security should not be a paid value-add. Everyone deserves security.

I could tell many more stories too.


> I wanted to collect records, go to the police, and have the criminal arrested.

That's not really how it works. The police can subpoena Google for the records, they won't trust audit logs you provide. Just file a police report if this is a real issue.


I don't think police issue subpoenas. They may be able to acquire a warrant to force google to turn over data, but would they actually do it? They would certainly require some sort of preliminary evidence or probable cause.

Of course your attorney could file an action against google (or another party) and a court could subpoena google's records to resolve it, but that's starting to sound expensive...


That's not how real-world police departments work, at least where I live. The police are lazy. They receive many complaints, and ignore most of them. Coming to the police with allegations and no evidence simply doesn't go anywhere.

Audit logs I provide won't be enough for criminal prosecution. They would be enough evidence to cause my local police to investigate, as well as adequate cause for a warrant to Google.


The prevailing take on HN and most other geeky sites is that measures meant to prevent users from fully using their devices - DRM, secure boot, etc - are harmful at worst and pointless at best. We usually don't get upset about iPhone or Playstation jailbreaks - we celebrate users regaining control of their devices. This is even though you can think of a malicious use or two.

What's different about this issue? That it gives us an opportunity to bash Google? And to make broad inferences about the company's supposed demise from a single anecdotal data point? Essentially every other company that attempts these kinds of controls will sooner or later find it bypassed, usually many times over...


Kids bypassing a default web browser is not a security issue, but a parenting issue.

I can confirm that this works as a bypass for IBM's MaaS360 for at least one organization

I don't trust Google for multiple reasons, won't use them for anything important or to start new services, am careful about what I search, etc...

However, as far as my knowledge tells me, Google is the best in the biz when it comes to security. While iPhone 0-click exploits cause the deaths of journalists and leak nudes of billionaires, the biggest 'Android' exploit, Pegasus, requires going to some website, downloading an APK, going to settings, clicking allow install from web, then installing the malware. (Don't @me about Samsung hardware issues; if you cared about quality/did research, you wouldn't have bought a Samsung Android. Or heck, anything from Samsung.)

Google is a crap company that I barely use (at most degoogled services and the occasional search when DDG can't do it), but we should give companies credit when they do things well. It promotes competency over relentless marketing.


Not disagreeing, but consider the user base of iOS vs Android. iOS users are wealthier/etc., so exploits affecting them seem more likely to be “newsworthy” and, hence, more likely to be pursued (higher upside).

Similarly, consider how a sunken ferry that killed hundreds of migrants went largely unnoticed during the brouhaha surrounding the Titanic sub.


https://zerodium.com/program.html

Zerodium pays more for Android zero click FCP(full chain with persistence) than on iOS zero click FCP. Most other categories Android and iOS exploits pay the same.


Google is substantially wiretapped

This is baseless fud.

[flagged]

> It’s documented fact

Can you link any sources that document this?




https://news.ycombinator.com/newsguidelines.html

> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.


Are you pretending to be a moderator? Referencing widely documented news isn’t an insinuation. If you want to see the links yourself you can just ask - not every post here has citations linked even when already widely available/known (I see several people have provided several references already)

> You’re apparently an employee shareholder with a bias.

Don’t care

It is utter fud / bs.

Don't accuse me of spreading fud, it's widely covered news

Are you sure about biggest Android exploit?

Tests conducted by Project Zero confirm that those four vulnerabilities allow an attacker to remotely compromise a phone at the baseband level with no user interaction, and require only that the attacker know the victim's phone number. [1]

[1] https://googleprojectzero.blogspot.com/2023/03/multiple-inte...


Do you have evidence Pegasus used this?

To be exact, those aren't exploits of Android itself, just the device it's running on. Not much of a difference in the outcome, but I guess it doesn't defeat the argument of Google having a good platform.

This is a recent one. A WhatsApp bug [1] exposed both Android and iOS a few years back. And then there was the MMS exploit [2] that affected close to 95% of Android phones.

[1] https://www.ft.com/content/4da1117e-756c-11e9-be7d-6d846537a...

[2] https://en.wikipedia.org/wiki/Stagefright_(bug)


While Stagefright "affected" them, it often wasn't exploitable. Not to mention that was 5+ years ago and they've since sandboxed all codecs, mitigating potential threats from that vector in the future.

A browser is benign technology. If you want to block network access, you install a network filter.

Back in my day we would run laps around the school webmaster and their site blocking. Eventually they gave up when they realized there were more proxy sites available for us kids to find than time they had to go through the logs and block this stuff on top of their usual IT workload for the week. At some point we also realized you could proxy a website with google translate, and that basically became as good as gold in terms of an unblockable proxy, because kids needed that website for language classes.

Good for those children. They have figured out how to return control of the device to its user instead of it unquestioningly serving the interest of its owner.

> I don't think engineers at Google quite have a perspective on what it means when their decisions (not just security) ruin thousands of lives.

Doesn't that seem a little hyperbolic? "Ruin somebody's life" seems pretty dramatic.


Are you familiar with identity theft? How about browser-based attacks leveraged via Google vulnerabilities that allow malicious (corporate) actors access to your keystrokes (e.g., user sign-on information across all accounts)?

Google's response since mid-2022 has consistently been "deny deny deny" and downplay, much like you are doing here. Meanwhile individuals and small businesses are targeted and crushed. It's hard to know when your identity is compromised, and by the time you know it, it's usually too late to easily triage. To the extent Google introduces products and services that contain large-scale vulnerabilities, it is very much their fault. Yet, nothing happens. Individuals pay the price of using Google products, and Google continues to make billions of dollars, unscathed. Microsoft and Amazon are also guilty of this.

Where are actual consumer protections? Nowhere to be found, in the US.


Increasingly, it seems iOS is the way to go for kids' devices and managing screen time.

Maybe folks here have had good luck with other Android ROMs.


>Google's increasingly cavalier attitude towards security is concerning:

>1) Kids WILL use this to bypass parental / school controls as soon as they learn about it

What an utterly ridiculous take. It's 2023 and there are a myriad of options available for kids accessing content they want to see without using some convoluted and hamstrung procedure.


> Given the embedded browser is not secure, if a lot of kids do this

Sounds like a parenting issue, not a software issue.


This reminds me of how I used to get around the filter in high school (early 00s). This was the early days of the internet, so there were classes where they blocked the entire internet because it was a distraction.

I found out that (I'm sure this is a known exploit at this point, but at the time it felt awesome figuring it out) if I went into Microsoft Word (or maybe Works?), went to "help" on the menu bar, and clicked on "About Microsoft Works" it took you to an instance of Internet Explorer that you could then use to visit any website you like.

I had a really cool teacher for those classes though, and I'm pretty sure he was amused (and maybe even proud) when he saw high schoolers in 2001 or whatever, finding clever ways around restrictions set up by the school. We may not have been doing what he had explicitly asked us for, but clearly we were learning something.


This brings back memories of older Windows versions, where you could push F1 and trigger various run commands through Windows Help

My first thought too - this feels like that "F1 -> Open Help File -> Other... -> right-click on explorer.exe and select Run" method of bypassing login screen circa Windows 95/98.

I had not thought of this in multiple decades at this point, but I used to do this too! What a trip down memory lane.

[dead]

[flagged]

A similar workaround has been available in the 'about' licenses pages. Just follow a link to a license and then you have a browser. It's useful for getting a browser on car head units, Peloton bikes, etc.

Ha! I just went to the "Third-party licenses" page on my Pixel 6, and it loads a never-ending list of links. Looks like a list of all files in the filesystem.

Can you enlighten me where the "Third-party licenses" link is?

In Settings -> About phone -> Legal information

Reminds me of the trick in Windows 98, where you could bypass the password input screen by opening Help and the open file dialog.

You can do that (Win95) https://www.youtube.com/watch?v=1UfNlRe-goY

Or you can hit cancel (Win98) https://www.youtube.com/watch?v=LHgjN_RwH6g

Or you can simply close the password dialog and wait (Win98) https://www.youtube.com/watch?v=Uk_SKw9hOpQ


Except on Windows for Workgroups and Windows 9x/ME, the fact that you can dismiss the login dialog is an intentional feature, so bypassing it through Help is just a more convoluted way of doing something that is supposed to be possible.

It is a feature because the login window is there primarily as a single sign-on mechanism for remote network services (which obviously would not work when you just dismiss it), and there is no security boundary between local user profiles.


Maybe, but it was used as a security mechanism in my high school, so that we couldn't use the computers outside of designated times.

This is a bit like accessing the internet from chm (help) files when the browser was blocked.

Damn. I revealed my age!


Or using it to reinstall games that had been deleted by an administrator…

Are we revealing our age through exploits now? How about "using gopher on a university library terminal to access a site that launched a telnet session so you can check your out-of-state college email over the summer"?

I definitely didn't use the open dialog in Notepad to run other executables at the library.

WinPopup LAN messenger!

I was a student at a boarding school in the 90s that left WinPopup installed. I quickly found it and taught the whole campus. They tried to remove it but we had backups by that point. Then we found the teacher access internet proxy which had none of the student restrictions and shared it with everyone via WinPopup… I told the staff when I graduated.

F5 in notepad or use .LOG as first line.

Heck, in the last 5 years I've used this to create and run bat files on supposedly locked down citrix boxes :D

Been there done that.

https://www.nyx.net/

Allowed free shell accounts and the college board in GA offered free local dialup access to a Gopher server (Peachnet).

I had a terminal script that navigated through the Gopher menu until I could get to Nyx.


Wow - they still host user websites, some of which are from the 90's, with iframes, patterned backgrounds, etc. So cool.

https://www.nyx.net/userhomes.html


I find it quite amusing that you are warned about “large websites” they host where the pages are over 25Kb

At my high school we used to open Windows Explorer via the file open dialog in Notepad to escape the weird bookshelf shell... I think it was an IBM product.

Shit, wish I'd thought of that when I was in school.

My brother uses a similar trick to get to a browser to bypass login on locked Android devices. It blows my mind that they can't see the security implications of this.

Getting to a browser isn’t really a security vulnerability; many devices will even have a “guest” mode that provides direct access to the internet.

Getting to a browser is an open gate. Why leave the gate open?

Letting your kid or younger relatives use up your mobile data from your locked phone might not be a vulnerability but definitely isn't the expected behavior.

If you have physical access to the device it was game over to begin with.

A similar trick must be used when resetting some old Android devices: you can't set up an account because the date/time is wrong, and you can't set the date/time because you're still on the account setup screen.

If your kid can figure out how to access banned websites by discovering this hack, I think they deserve it.

Sure, but they might not have discovered it themselves, just read it off the internet or heard it from other kids.

Still better than 99.999% of the other kids out there :)

Both of which are forms of discovery

I can very easily "discover" how to make an illegal <item> in under a minute using the internet, does that mean I should be allowed to have it?

I'm pretty sure your response is not what the GP meant.


Sure. And if there are consequences, face them.

Generally kids are not able to comprehend or foresee all of the consequences of their actions which is why parents and their communities set rules/restrictions for kids.

I'm curious as to the reasoning behind why someone apparently disagrees with that.

Likely a teenager ;)

Generally humans are not able to comprehend or foresee all of the consequences of their actions…

Thus, saying kids can't perform superhuman feats isn't an argument that they should be required to do anything.


Oh, so you've been a teenager before, too? :)

(I recently saw a fantastic episode of "The Mind, Explained" on Netflix that describes this phenomenon well, along with why it happens: https://www.netflix.com/watch/81273770)


"if there are consequences" is begging the question.

Sure, but that's like discovering how to make pizza vs. discovering how to put a frozen pizza into a microwave.

You'd generally be more proud of the first one.


Depends on how much you cook and how good you are at it, right?

When I was a kid, I was damn proud of "discovering" Sub7 and using it to fuck with all of my friends, teaching them to do the same. Years later, I would "discover" how to read assembly by reading a book on it and then "discover" pirated copies of various disassemblers online and use them to reverse engineer games, write keygens, etc. Years later, I would write my first 0day exploit and then eventually make a whole career out of that.

But I was just as proud of discovering and using Sub7 to mess around as a kid as I was popping my first shell. I just knew more at each stage; the act of 'discovery' felt little different, though.


idk what sub7 is, but this is a great post. i like it a lot. sometimes i find it easy to forget what learning is and how it can take place and that we learn from each other and each other's work and sometimes we discover something that no one has discovered before -- it is all learning. thanks.


Given the number of preteens I've seen who can't even feed themselves these days without help from their mom, I'd say microwaving a pizza is an achievement.

I used to do the same thing in the lobby while my parents were banking.

Those "best viewed in Netscape Navigator" tags were golden for this. Workflow was almost exactly the same: click through until you get a "best viewed in" tag and get from there to a search engine.


I've used this to get to a browser that almost does full screen videos in a borrowed Tesla, from the YouTube app.

Then navigated to some dodgy adware infested streaming site and it was working okay until changing to another video, when it froze the whole Tesla computer and the car needed a hard reboot


"How's 2023 going?"

"Well, I played a video and crashed my car. …not like that."


The hidden help browser features in some FRP bypass methods on some older versions of Android. I used it to rescue test devices left by former colleagues.

This guy's experience reporting a bug to Google reminds me of mine:

Me: Here's a bug in Google Sheets that exposes deleted content to third parties.

Google: Not a bug. Working as expected, closing issue.

Me: Really? I was personally harmed by this bug while using the application.

Google: Actually, it is a bug but it's a longtime known issue, therefore you are not eligible for bug bounty. Closing issue.


Is it fixed now?

Probably not.

Google? No, if anything, it's getting worse... They would need to miss more than just cloud and commercialised LLMs to be truly shaken I am afraid.

Sounds like ChatGPT..

That is, to a T, almost identical to my experience reporting a vulnerability to Google too.

Me: Here's a bug in Gmail that allows spoofed email to scrub DKIM failures and appear legitimate

Google: "Won't fix (Intended Behavior)"

Me: Really? Google intends to allow spoofed email to appear legitimate in its interface?

Google: Actually, it's a known issue


That is our experience with Microsoft as well. We have submitted two separate email-related vulnerabilities in O365, one of which we would consider rather serious. We took our time to create a detailed report, with steps to reproduce, etc.

For both, you hear nothing for about 10 weeks, then it is either closed as "expired" or "won't fix".

Last time I checked, both vulnerabilities still exist.


Provide a 90-day timeline for the release of the exploit.

I wonder if you can string it further.

You: We are OK to publish a blog post about it then, right?

Google: ...


Alteration:

I've written the blog, and I'm going to publish it on...


Well, it IS a known issue after you reported it to them in the first place! :-)

I mean, technically it is a known issue now that they know about it ¯\_(ツ)_/¯

But it was known, the second time.

Same experience. Receive hot air and fluff from Google, then a few months later the head of TAO announces a "fix" to the vulnerability originally raised.

Additionally, the first articles announcing the vulnerability tried to link it to Chinese/Russia hacking. Shove that propaganda directly back at yourself (i.e., Google; a US-based company). Google left the backdoor wide open for anyone to exploit, foreign and domestic. And, it was definitely exploited by both.

Google has real issues. Not sure what happened after 2019, but it's not great.


I have an addition to this story:

Me: here's a bug in Google Play Services.

Google: Not a bug, working as expected, closing issue.

Me: I posted the bug on my blog, and it got extensive media coverage.

Google: It seems we were wrong! It is indeed a bug. We will get back to you in a few weeks.


Did some investigation.

So when you click on "Manage my account" you actually get taken out of the Settings app and into an Activity (the name for the "screen" God object on Android) embedded inside of Google Play Services. Following that, the browser eventually turns out to be

    com.google.android.gms/.auth.folsom.ui.GenericActivity.

This doesn't seem to be using the default system WebView implementation, as on my phone that would be Chrome.

Android allows you to build a JS interface between Android code and Javascript code using addJavascriptInterface[0]. They seem to be doing this...a lot in GMS, which is an interesting attack vector to look into later.

Our suspect "mm" interface is in MagicArchChallengeView. Which gets you an obfuscated "bwuz" class as what mm links to. bwuz seems to be pretty empty though, again linking out to a few obfuscated classes.

Doing a straight string search, two classes expose these functions: "qvc" and "pdn". pdn seems like the meat, while qvc has some helpful error logs exposing what each param is.

Looks like setVaultSharedKeys expects a gaiaId (Google Accounts and ID Administration ID), and a JSON array of JSON objects with two values, epoch and key. It creates an arraylist of them and passes them off to an abstract class that is everywhere in the package, but seems to be really involved with account security.

addEncryptionRecoveryMethod expects a gaiaId, a security domain list, and a member public key. It again packages them into lists and passes them off to the same abstract class mentioned above.

That's where I drop off because I have to get to work. Interesting though and warrants further exploration, both on this specific interface but also the others they expose through GMS into webviews.

[0]: https://developer.android.com/reference/android/webkit/WebVi...
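For illustration, page JavaScript inside that WebView could presumably drive the interface described above roughly like this. The method name, parameter order, and payload shape are all reconstructed from the obfuscated code above, so treat every name here as an assumption; a stub stands in for the real Java bridge so the sketch runs anywhere:

```javascript
// Hypothetical shape of a call into the injected "mm" bridge, reconstructed
// from the parameter names in qvc's error logs -- not verified against real GMS.
const sharedKeysPayload = JSON.stringify([
  { epoch: 1, key: "base64-key-material" },
]);

// In the real WebView, "mm" would be a Java object proxy created by
// addJavascriptInterface; this stub just records the calls it receives.
const calls = [];
const mm = {
  setVaultSharedKeys: (gaiaId, keysJson) => {
    calls.push({ method: "setVaultSharedKeys", gaiaId, keys: JSON.parse(keysJson) });
  },
};

mm.setVaultSharedKeys("123456789012345678901", sharedKeysPayload);
console.log(calls[0].keys[0].epoch); // 1
```

On a real device the interesting question is which origins get this bridge injected at all, since any page that does can call it directly like the stub above.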


I wonder why they’re not using the system default webview… Does this mean it’s WebKit instead of Blink? If it is Blink, it seems likely that it’s not as up to date as the one provided by Chrome.

EDIT: just noticed the docs link, yeah it’s WebKit.


It's not actually WebKit, that's just a package name that was poorly chosen a decade ago.

https://developer.android.com/develop/ui/views/layout/webapp...

It's the system WebView, it's just not using Custom Tabs.

https://developer.chrome.com/docs/android/custom-tabs/


Ah, my bad!

I was confusing WebView with custom tabs, sorry about that! Been a while since I've needed either for anything.


Alex Russell talked about Android's WebView not being Chrome at State of the Browser in 2021

https://2021.stateofthebrowser.com/speakers/alex-russell/


Huh. Where did he say that? WebView is Chromium and it's updated alongside Chrome updates. There are differences because of the process model in apps and some APIs aren't available...

https://chromium.googlesource.com/chromium/src/+/HEAD/androi....


> gaiaId (Google Accounts and ID Administration ID)

Wow, someone at Google is undoubtedly proud of coming up with that for what I assume is essentially a Google world wide unique ID, and for good reason.


Reminds me of the "Agents of S.H.I.E.L.D." scene where someone quips "Sounds like someone just really wanted it to spell 'shield'."


What is somewhat more concerning is that, depending on the particular Android version and OEM customizations, there are various ways to get into this browser, or even the complete Android Settings screen, from the onboarding flow. Most FRP-bypass exploits involving only user manipulation of the device are built on something like that.

[dead]

Side point: When you rent a Tesla, please do not sign into youtube. Subsequent renters can see your whole Google account (including family members, phone numbers etc).

Or at least sign out before you return the car!


What makes this Tesla specific? This seems like pretty standard behavior on public computers or TVs in hotels.

Yeah, that really sketches me out, because people log in to YouTube but that's usually their main Gmail (or worse, Google Apps) account. I know Google probably requires re-auth if somebody tried to use that session token again from a different device... probably.

On the other hand, you can use a very similar kiosk escape on the Tesla YouTube app which is just a webview to get a full screen browser while in Park on the Model 3.

There is a Factory Reset and/or a Clear Browser Data function under Car menu > Software or Service iirc. The car remembers your navigation locations too, which can usually only be removed by either deleting your driver profile or resetting the car.


I can't find the hamburger menu, can anyone else? Maybe a screenshot would help.

This is exactly how I used to bypass the parental control application on windows when I was young. I only had 1 hour of computer time, after which the tool would close all applications on my PC except Microsoft Office Apps (for productivity). After a bit of clicking around, I somehow managed to open a browser in Outlook and play flash games on Miniclip.

Reminds me of how we worked around how the macs were locked down in high school. You could only launch certain applications and you couldn't open System Preferences but in Safari you could edit the default web browser. We would change the default web browser to Terminal (You could open Terminal but it was limited in what it could do, like it would fail at opening other apps) and then open Word, make a link, and click on it. The Terminal instance that opened had more privileges than opening it by default and using this instance you could run `open /path/to/your/app`, for example a game on a disk you inserted.

I remember our study hall teacher coming over at one point and asking if we were allowed to play the game (Starcraft) and the answer we gave (still can't believe it worked) was "Well these computers are pretty locked down as you know so if we are able to play this game it must mean the school is ok with it", which he accepted.


> We think the issue might not be severe enough [...]

It might not? In other words, if a security vulnerability is reported, assume everything is actually fine until proven exploitable beyond any shadow of a doubt?


mmm my kids *will* use this if they find out about this.

What would be a way to find out if they did? Does this leave any trace?


If they use it often, then the Settings app will have a longer screen time compared to normal usage I'd guess.

Here's the fun thing: it doesn't! If I remember correctly, the only place you can see that something's going on is if you check foreground data usage for Google Play Services.

This is very similar to the FRP bypass that I've performed on Pixels. (They were donated to me and didn't realize they were "locked" until months after I'd been given them)

It's just the system WebView, surely, and one session; it's still just Chrome, etc. You can easily get to that. I don't think there is any issue whatsoever.

Further, "secret" is highly inaccurate; this is easily known public knowledge.


A Siemens program I recently installed shipped Firefox with it to display the manual

Ah! The good old days. The Windows XP Calculator app had a browser I used to bypass the browser blocks in elementary school. :D

Okay, I'll bite... How did you bring up a web browser from XP Calculator? I know there's HTML Help, but I don't see a way to get on to the internet from there.


Thanks, did not think to check the System Menu.

I remember having to do tricks like those to unlock an account-locked Android device once (the kind that wouldn't go away even if you reset the phone).

The browser could be used to download an APK which triggered the Google Account login screen. Then you could login with a throwaway account and that would unlock the device.


Reminds me of the Switch, which has a built in fully functional web browser, but it’s only surfaced when connecting to a DNS server that requires a password as far as I am aware.

In order to log in through some captive portals on restricted networks you need a browser. Presumably you sometimes even need JavaScript, or else Nintendo likely wouldn't have included a JS engine, since that greatly increases the attack surface.

It would have been possible to design a system that lets users read a network's ToS and enter login information without needing an entire browser. Granted, it's probably safe to assume most captive portals aren't trying to exploit your non-traditional computing device (such as a Nintendo Switch), and if a user is changing the DNS to evade a captive portal and go to some other site, then any exploits that occur on their device are kind of their own fault, but it still seems like a suboptimal system.

Either you're going to have a bunch of exploitable devices that would otherwise in practice be secure, since they need to have a web browser, or you're going to have devices that straight up can't access many networks (since they don't have a built-in browser). I'd argue the latter problem is even greater than the former: web browsers are incredibly complex! If your device is capable of running an already existing browser (e.g. Linux- or BSD-based systems), then it's not that big of a deal. But if it isn't (e.g. certain embedded systems), then it sucks, though there are sometimes workarounds for accessing captive networks without an on-device browser (e.g. Apple TV lets you log in through captive portals via an iPhone or iPad; that works for people in the Apple ecosystem, but such integration isn't as easy with devices from unaffiliated companies).


The Switch not only has a web browser, but a web server! (AFAIK it's only used when downloading from the Switch's media browser app to your phone.)

Haha, I spent an embarrassing amount of time hunting down browsers hidden in apps in the past. The same thing exists on iOS, and it bypasses, for example, a time restriction on Safari or Chrome, but can't bypass domain bans or domain time limit restrictions (also included in parental controls).

You can access google.com from many apps, I'd say probably half of them, especially the Apple Support app and Microsoft and Google apps. Those apps always have external links to, for example, their Terms of Service, and you can access the internet without much difficulty, although the procedure varies for each app/external link. Also, those in-app browsers persist history and website data, and there's no way I know of to delete them.

iOS's Settings app had a couple of links which opened in-app browsers that were able to bypass all restrictions, but they're all gone now, except literally one. Apple, if you hear me, it can be accessed this way: Settings > iCloud > Family Sharing > Screen Time > Learn more about Family Sharing. I am able to access the internet from there: scroll to the bottom > About Apple > Apple Leadership > Albert Gore Jr. He has links to Insta, Twitter and Google Books (which can take you to Google Search and YouTube) on his site.

Me too. I have a folder on my Android called "breakthrough" and it has 20 applications, including Zoom, Waze, Spotify, and so on... apparently, now, it should also include Google and maybe even Settings :)

Side note: I built a tool to help detect non-default JavaScript variables attached to the global scope https://github.com/jonluca/Window-Differ to aid in security analysis like this. It would be pretty nice to have in devtools.
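The core idea behind that kind of global-scope differ can be sketched in a few lines: snapshot the default global property names (e.g. from a clean about:blank iframe), then diff the live window against that baseline to surface injected objects like the "mm" bridge. Plain objects stand in for window snapshots here, and the baseline list is abbreviated for illustration:

```javascript
// Return the names present on globalObject that are not in the baseline.
// Injected bridge objects show up as own properties of the global object.
function diffGlobals(baselineNames, globalObject) {
  const baseline = new Set(baselineNames);
  return Object.getOwnPropertyNames(globalObject).filter(
    (name) => !baseline.has(name)
  );
}

// Mock snapshots; in a real page these would be captured from window.
const defaults = ["document", "location", "navigator"];
const live = { document: {}, location: {}, navigator: {}, mm: {} };

console.log(diffGlobals(defaults, live)); // [ 'mm' ]
```

In a real page the baseline would need to be captured per browser and version, since the default set of globals varies between engines.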

Very cool! To ensure there is only one JavaScript interface, I edit hackability to be without popups and print, then host it. Your tool sounds easier. https://portswigger-labs.net/hackability

Honestly, this is pretty sloppy and Google should know better.

I've done a lot of work with WebView on Android, and it's a straightforward process to intercept requests and restrict them to whitelisted domains. It's well-trodden ground [1], especially for apps that use the WebView for their entire UI. This is an oversight by everyone from the dev team to the product manager to the QA working on it.

1. https://blog.oversecured.com/Android-security-checklist-webv...
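As a rough sketch of what that interception could look like, here is the kind of host-allowlist check a `WebViewClient`'s `shouldOverrideUrlLoading` might delegate to, written as plain JavaScript for illustration; the allowed hostnames are hypothetical:

```javascript
// Hypothetical allowlist of hosts the WebView is permitted to navigate to.
const ALLOWED_HOSTS = new Set(["accounts.google.com", "myaccount.google.com"]);

function shouldAllowNavigation(url) {
  let parsed;
  try {
    parsed = new URL(url); // throws on malformed URLs
  } catch {
    return false; // unparseable URLs are blocked outright
  }
  // Require https and an exact hostname match: substring or endsWith()
  // checks are a classic source of bypasses (e.g. evilgoogle.com).
  return parsed.protocol === "https:" && ALLOWED_HOSTS.has(parsed.hostname);
}

console.log(shouldAllowNavigation("https://accounts.google.com/signin")); // true
console.log(shouldAllowNavigation("https://evil.example/redirect"));      // false
```

The Android-side equivalent would return `true` from `shouldOverrideUrlLoading` (i.e. cancel the load) whenever a check like this fails.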


Can you imagine if kids actually started using this? mm.closeView() to instantly boot any pesky children off of your website, the ultimate age gate :)

The amount of "but parental controls!?" comments in here is kind of shocking. Has something changed about the hacker spirit around kids?

This is an ancient form of exploit that was (and maybe still is) very popular with Windows: from the ctrl+alt+delete lock screen, you could open things like Help and accomplish similar actions, eventually getting access to a browser (which, in Windows' case, was also ~Windows Explorer with full file system access).

This is telling on my age, but I loved these hacks as a young teenager on early versions of iOS. Parental control existed but didn't touch all areas of the platform.

Using this WebView to use Bard.
