Or the usability of this feature would be so frustrating that no one would use it. Constant security popups on every start, a very limited API available to apps, etc...
The features they are adding are often unnecessary and designed to help them further lock down the web, prevent ad blocking, etc.
I’m surprised no one’s even noticed that Portals are basically supercharged AMP, to the point where you’d never leave Google. A truly dystopian future they’re trying to slowly cement.
You are deluded. Modern internet developers don't do this sort of thing. Not what you've described. Not as you've described it.
It's never about deploying features that can't be explicitly controlled by the mothership. Furthermore, the graphics processing you're envisioning will consume battery power and might be arbitrarily kiboshed by the end user's OS. That alone (the lack of OS control) gives developers a plausible way to rationalize the Orwellian parts.
You want things to work in a way that will never ever happen.
I'm fairly accustomed now to disabling this shite, but if it's only going to become more prevalent I could see it creating real toil. Are we also going to need a Pi-hole-like solution for servers and dev boxes?
They instead cripple it by restricting important functionality to native apps, e.g. WebGL and push notifications, and by preventing users from modifying the system to remove these artificial limitations.
As long as screenless devices and other unattended things like light bulbs didn't need manual intervention, and it was easy to turn off the checks on a Linux server, it would be cool.
But it would probably be an annoyance in large multi-hotspot environments, and could become another thing people are trained to mindlessly click through, like cookie prompts.
I don't think it's a problem of the developer not knowing, they generally do know how much storage they'll need (or at least put an upper limit). It's a problem of trust.
Maybe the dev thinks they'll need 1GB, but I'm not ready to give them that. It's the same way an app asks for certain permissions when you install it on, say, Android.
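For what it's worth, the web platform already has a (mostly prompt-free) way for a page to ask about its quota: the Storage API. A rough sketch of what that looks like, with the caveat that these are browser-only APIs and the `checkQuota` wrapper is my own illustration, not anything proposed in the thread:

```javascript
// Sketch: inspecting an origin's storage quota via the Storage API.
// Browser-only APIs; checkQuota is an illustrative wrapper that
// degrades gracefully outside a browser.
async function checkQuota() {
  // Outside a browser (or in older browsers) these APIs don't exist.
  if (typeof navigator === "undefined" ||
      !navigator.storage ||
      !navigator.storage.estimate) {
    return null;
  }

  // estimate() reports rough usage and the quota the browser is
  // willing to grant this origin -- no user prompt involved.
  const { usage, quota } = await navigator.storage.estimate();

  // persist() asks the browser not to evict this origin's data under
  // storage pressure; this is where the trust decision discussed
  // above actually surfaces.
  const persisted = navigator.storage.persist
    ? await navigator.storage.persist()
    : false;

  return { usage, quota, persisted };
}

checkQuota().then((info) => {
  if (info === null) {
    console.log("Storage API not available here");
  } else {
    console.log(`Using ${info.usage} of ${info.quota} bytes; persisted: ${info.persisted}`);
  }
});
```

Browsers differ on the trust question: as I understand it, Firefox shows the user a prompt before granting persistent storage, while Chrome decides silently based on heuristics like site engagement, so the "ask the user for 1GB up front" model the parent describes doesn't really exist on the web today.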
I wonder why anybody thought it was a good idea to let any web page store some random stuff on my computer. Cookies were bad enough already.
You know, you can say what you want about Flash, but at least I could block it. These days I can't browse half the web if I disable JavaScript. One of these days I'll just run my browser under a separate UID, just to keep it under control. Or better yet, in a VM, since it appears people want to turn the browser into an OS anyway.
This would be a lot easier to believe if they allowed you to stop apps from accessing the internet. As they don't, I simply don't buy any argument they make from a privacy or security perspective.
Hard to argue with the economics of that mitigation though. The abuse-to-legitimate-use ratio is probably pretty high. Getting rid of user agent strings will bring back the scaling problems, though, which should probably be addressed directly anyway.
>Fewer gatekeepers, lower development costs, tighter feedback loops.
There are far more gatekeepers when you've got an application spread across the internet. The ISP or cloud provider can lock you out; you can lose your certificate, DNS, or connectivity. The browser, an ad blocker, or a random change to Java/JavaScript, etc. can change your results. Governments at all levels, along with their spy agencies, and large profit-driven corporations can intervene between your code and the end user.
A cable could get cut, or WiFi go down, a Telco or Apple or Google could decide your "app" isn't in their interest.
>more users -
I agree, but they're used to "free", and aren't always the customer.
>lower development costs
How is an "app" that has to work through all of the above easier to support than a program that's installed (or just thrown into a folder and run as a "portable" program)?
You can write to the screen almost directly, and probe files/folders in the same way, with no gatekeepers.
>tighter feedback loops.
Since you can't run it on your desktop and be confident, you have to test across all your users, or a wide swath of output devices, browsers, OSs, etc.
Because then you don't impede people who need a new widget installed, just because everyone and their mother in IT has to try it and test it before you're allowed to use it.
Especially at Google scale, where the BeyondCorp system described in their papers could automatically see when an endpoint was doing something naughty, block the user from accessing corporate resources, and give the user information on how to fix it, instead of blocking them from installing anything (even if what they want is perfectly harmless).
Though this is probably a good move, I think it's going to catch out a lot of bad web apps, based on my experience of trying to disable it at a web hosting company :-)
At the very least, this might spawn some discussion around being able to remotely enable/disable SDKs, from a server that you control. Last week it was Google Maps, today Facebook SDK...