
> One thing that Safari is incredibly good at (and its engineers are proud of) is the ability to reduce resource consumption for non-active tabs and windows to nearly zero.

I understand this is rare, but I just have a very very hard time understanding what is so hard about this.

Page not visible? Just stop it. Stop the layout engine, stop the JavaScript VM. If there was a transfer in progress, let it finish; the page can get the callback when it's active again ...

(I know there are some tricky cases, a lot of them, but there just isn't any reason why a browser with 30 tabs should be consuming over 1% CPU...)
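(Pages can already do a crude version of this themselves. A minimal sketch using the Page Visibility API; the chart-updating work here is a hypothetical placeholder:)

    // Pause self-driven work whenever the tab is hidden.
    let timer = null;

    function startWork() {
      timer = setInterval(() => updateChart(), 1000); // hypothetical work
    }

    function stopWork() {
      clearInterval(timer);
      timer = null;
    }

    document.addEventListener('visibilitychange', () => {
      if (document.hidden) stopWork();  // background: stop burning CPU
      else startWork();                 // visible again: resume
    });

    startWork();

Of course the complaint stands: the browser could enforce this itself instead of hoping pages opt in.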




> Safari is not known for being slow.

Yes, it is slow on older hardware. It was slow even back then. You must have forgotten the time it took to paint the rest of the page if you scrolled or zoomed out a liiittle too fast.

Sure, this was due to the minuscule amount of RAM Apple ships their products with, but “Safari is slow” is appropriate.

Also “Safari is slow” on my i9 as well, I just need to open a GitHub PR with 10+ files to see it come to a crawl, whereas Chrome never feels it. But hey, its scrolling is buttery smooth even if clicking doesn’t work.


> Basic pages, like Messenger, Facebook, and Gmail.

You betray your ignorance of webdev. Those three sites are all extremely heavy web applications - barely even "sites" - and I'm willing to bet they're still faster on Chrome than on Safari (on non-Apple platforms).

> The same pages, using the exact same browser features, in Safari use just a fraction of the resources and actually provide a better browsing experience!

Instead of using your own subjective perception, you should try running a series of standard performance benchmarks, like the SunSpider JavaScript benchmarks[1] on Chrome and Safari on a non-Apple platform - then see if your perceived performance gap remains.

[1] https://webkit.org/perf/sunspider/sunspider.html


> which is not a particularly light workload given how heavy the web is these days

I think a lot of this might be attributable to just how well power-optimised Safari is. I could get 10 hours using Safari when my MacBook was new, but only 6 using Chrome or Firefox! Given that Safari is speed-competitive with these browsers, it's very impressive.


> Look, most modern software is spending 99.9% of the time waiting for user input, and 0.1% of the time actually calculating something.

So why isn't my browser at 0% CPU when IN THE BACKGROUND then?


> Chrome will happily use as many of my 12 cores as it likes and it just doesn't even slow down.

I spend a lot of time profiling browsers and I don't see a lot of multicore usage when browsing in current engines. Usually layout and painting are happening for one page at a time (the tab you're looking at), and multicore performance for them is really poor in current engines.

I have not seen browsers saturate 12 cores. For example, go to [1] in your browser: no engine uses more than maybe a core and a half as it chugs along, struggling to reflow.

I think that Chrome- (and IE-)style per-domain process-based parallelism provides some benefits, but mainly in getting stuff like Gmail and Facebook notifications off the main thread, not really in improving throughput.

[1]: http://www.w3.org/html/wg/drafts/html/master/single-page.htm...


> Sizes are being used as a fuzzy proxy here which makes sense—putting a cap on CPU usage and memory is a lot harder to pull off. Is focusing on size ideal? Probably not. But not that far off base either.

No: this doesn't make sense :/. The core problem is that stuff sits around executing some tiny input handler or animation in a loop, burning CPU. When I have tracked the tabs that are the worst performers down to the code causing a problem, it is never a large amount of code: it is some stupid mechanism that polls the position of something (like the cursor or the scrollbar), or is trying to push some analytics to a server.
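(To make that concrete, a hypothetical sketch of the kind of tiny, CPU-burning handler I mean; the analytics call is a made-up name:)

    // A tiny polling loop that never lets the tab go idle, even
    // when nothing on the page is changing.
    let lastY = 0;
    setInterval(() => {
      const y = window.scrollY;      // poll the scrollbar position
      if (y !== lastY) {
        sendAnalytics(y);            // hypothetical analytics call
        lastY = y;
      }
    }, 16);                          // ~60 times a second, forever

    // The event-driven equivalent costs nothing while idle:
    // window.addEventListener('scroll', () => sendAnalytics(window.scrollY));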

This really has nothing to do with the amount of code being downloaded. I realize some people complain about how much stuff they have to download, but that just isn't what is actually causing most people problems. Sure, tracking CPU is sort of annoying, but it absolutely isn't hard. Chrome is already running these things in separate processes (for security), and the operating system is tracking the time used by each thread: you can just ask it and enforce some kind of limit if that is what you care about.

I mean, in this article I see ideas for size limits for images, which is at least consistent... but that is going way way too far: 1MB just isn't good enough for a reasonable image. If you care so much about bandwidth, make a bandwidth cap for the page and if it exceeds it--across all media--figure out some way of blocking or punishing the site.

What most of us care about is that there seems to be no limit on the CPU usage of any given page. This is easy to fix--it is a virtual machine, after all!--by doing the same trick Erlang uses: compile to a preemptive fiber and then limit its execution time slices.
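(A hand-wavy sketch of that idea, assuming the VM inserts a budget check at loop back-edges the way Erlang counts reductions; every name here is hypothetical:)

    // What the VM might compile `while (cond) body` into:
    let budget = 10000;            // arbitrary slice, in "reductions"
    while (cond()) {
      body();
      if (--budget <= 0) {
        yieldToScheduler();        // hypothetical: suspend this fiber,
        budget = 10000;            //   refill the slice on resume
      }
    }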

What I know I care a lot about is when a tab I haven't looked at in three days is suddenly using CPU time _at all_. Just make it so background tabs get severely limited in their ability to do background execution and eventually get stopped entirely, and the problem is essentially solved.

(Chrome, which is apparently already big on these size limits, doesn't do this, and I swear it is because it is against Google's interests to do it as it mostly makes it more difficult to do stuff like tracking and advertising :/.)


>If I'm not running high-cpu tasks and just web browsing,

You state this as if they are two different things. Sadly, more and more shittily designed sites are using more and more resources. Whether that's just a poorly written bit of JS or a maliciously written bit of JS, web browsing is becoming more compute-intensive.


> For example: optimize PNGs, it's easy and a one time procedure.

And then that PNG is loaded by a webpage, which contains no HTML, but loads 28 megabytes of JS, which your browser has to parse, JIT and run, and then that triggers loading of web-fonts, and after that we create a virtual DOM, and then render that into a real DOM... (which is probably around 150kb)

And then after about 15 seconds of 100% CPU time, the browser finally has something to show... And then it loads that minified PNG.

You know what? That PNG isn't the issue. That PNG probably has a highly optimized decoder, written in native code, and the relative cost of an optimized/unoptimized PNG in this case is probably 0.0001% of the total energy we just spent getting and rendering that page, a page which more often than not contains basically static content.

If that page instead:

1. had been plain, pre-rendered HTML

2. had no JS, except where needed.

3. had no web-fonts, because the user already has 2000 fonts installed. (And you want to be green, right?)

4. And finally had that PNG.

Then optimizing that PNG would actually have an impact. On most sites today though? Not a chance in hell.

Also: that page would render in a nanosecond, so it's not just a greenification, it's an actual real-world performance optimization too.


> [...] the browser runtime environment which includes a slow and complex DOM

Counter to this narrative, the DOM is very fast, and compared to many other platforms it's much faster at displaying a large number of views and then resizing and reflowing them.

E.g. Android warns when there are more than 80 Views, meanwhile Gmail is 3500 DOM elements and the new "fast" Facebook is ~6000 DOM elements. Neither of those apps spends the majority of its load time in browser code; it's almost entirely JS. Facebook spends 3 seconds running script on page load on my $3k MacBook Pro. That's not because the DOM is slow, that's because Facebook runs a lot of JS on startup.

If you cut down to the metal, browsers can be quite fast, e.g. https://browserbench.org/MotionMark/
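(Easy to check for yourself from the devtools console; a quick sketch that counts elements and times one forced reflow:)

    // Count all elements currently in the DOM.
    const count = document.querySelectorAll('*').length;

    // Invalidate style, then force a synchronous layout by reading
    // offsetHeight, and time how long the reflow takes.
    document.body.style.fontSize = '17px';
    const t0 = performance.now();
    const height = document.body.offsetHeight; // forces reflow
    const t1 = performance.now();
    console.log(count + ' elements, forced reflow: ' + (t1 - t0).toFixed(2) + ' ms');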


>Having a browser open uses approximately 0% CPU though.

Not on the modern web it doesn't. A lot of pages are continuously running background tasks and refreshing over time.


> You might ask, what about modern websites, built with D*t compiled to webassembly, GPU acceleration, reactive frameworks, material design and capable to load the multi-core CPU at 100%? I am not using such sites so I don't care.

Does this just mean that browser developers are optimizing for the right thing, just not something that benefits you? Tons of people use these sites.

> If the system starts swapping, it becomes orders of magnitude slower.

Not really. Browsers try to keep stuff in swap that they probably won't need. Swapping doesn't become a problem until you're almost out of memory as well, and then you might get thrashing. But there's a wide range where CPU optimizations make sense. And such a large fraction of people have SSDs that even swap access can be pretty fast.


> But when it’s on every page, from a web performance perspective, it equates to a lot of data.

The author of this article is apparently unaware of browser caches.

> JavaScript is known as a “render-blocking” resource.

Yeah, if only there was, like, an async attribute or something.
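(There is: markup gets the async/defer attributes, and dynamically inserted scripts don't block parsing at all. A sketch with a placeholder URL:)

    // Dynamically inserted scripts are async by default: parsing and
    // rendering carry on while this downloads. URL is a placeholder.
    const s = document.createElement('script');
    s.src = 'https://example.com/widget.js';
    s.async = true;  // explicit here, though it's the default for this path
    document.head.appendChild(s);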

> The graph below shows time to interactivity dropped from 11.34 seconds to 9.43 seconds

So, jQuery is way too heavy for these people, but interaction tracking analytics (these packages usually start around 200kB) is perfectly fine?

> total page load time dropped from 19.44 seconds to 17.75 seconds

Burying the lede here: if my team celebrated a 17-second page load, we'd be fired on the spot. Going out on a limb here to suggest jQuery is the least of their problems.


> So if a page is running a computation in the background, for example, rendering some real-time chart or something, you'd rather it continue to do work and waste battery life than throttle itself?

Yep, that's the behaviour I expect. If I want you to stop I'll close you.

> But then you'll say "well the browser should just throttle any tab i'm not looking at"

Why would I say that? I explicitly don't want pages to act differently when I'm not looking at them. If they're updating when I'm looking at them then they should keep doing that when I flick away to something else.

It's a privacy issue and it's a functionality issue. It's literally none of the page's business whether I have it in view or not.

Next you'll be telling me that pages should be able to use the camera and face sensing tech in smartphones to detect if I'm looking in the right direction. For my convenience of course, nothing to do with advertising, tracking, analytics...

FYI I've managed to find an add-on for Firefox that blocks this API. No noticeable difference in battery life.


> We carefully monitor startup performance using an automated test that runs for almost every change to the code. This test was created very early in the project, when Google Chrome did almost nothing, and we have always followed a very simple rule:

> This test can never get any slower.

That's actually very similar to how Safari was built.

"After Steve showed the Safari icon, he clicked to the next slide. It had a single word: Why? Steve felt the need to say why Apple had made its own browser, and his explanation led with speed. Some may have thought that touting Safari performance was just marketing, a retrospective cherry-picking of one browser attribute that just happened to turn out well. I knew better. I had been part of the team that had received the speed mandate months earlier, and I had participated in the actions he now described which ensured the speed of our browser." [1]

[1] Kocienda, Ken. Creative Selection: Inside Apple's Design Process During the Golden Age of Steve Jobs


> The faster a CPU finishes, the faster the CPU can sleep, which is how you save battery life.

Yup.

> The more cores you use the lower the CPU frequency can be, which saves power, since frequency increases do not use power linearly.

More cores in use has little to do with frequency and more to do with heat. More heat means more thermal throttling, which lowers frequency. Lower frequency means the work takes longer, so the CPU doesn't get to sleep sooner.

> I think it goes without saying that javascript workers have nothing to do with the design and layout of a webpage.

Yup. That's exactly why I don't want them. Why should I execute something which doesn't, and shouldn't, have anything to do with rendering page content?

Don't get me wrong, I'm fine with using more cores if it's actually beneficial. But every use I've ever seen for a web/service/javascript worker has always been user hostile by taking what should be done on a server and offloading it onto the user's device instead.
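(For reference, the pattern in question; the file name and the work itself are hypothetical:)

    // main.js: offload computation from the main thread to a worker.
    const worker = new Worker('crunch.js');   // hypothetical worker file
    worker.postMessage({ numbers: [1, 2, 3, 4] });
    worker.onmessage = (e) => console.log('result:', e.data);

    // crunch.js, the worker side:
    // self.onmessage = (e) => {
    //   const sum = e.data.numbers.reduce((a, b) => a + b, 0);
    //   self.postMessage(sum);
    // };

Whether that computation should have run on a server instead is exactly the disagreement here.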


> The idea that apps should intentionally throttle their CPU usage to avoid "slowing down other apps" at the expense of performance is nonsensical. Apps should use all the resources in the machine to get the job done faster and go to sleep faster. The kernel, not userland apps, is responsible for multiplexing the system resources: that's literally its job, and only it is in a position to do so properly.

Did you know that there are apps - like IDEs, servers, games, etc. - which require a large amount of resources? If you run another resource-hungry app, it might affect the first app's runtime, which could end up freezing both. Have you ever used resource-hungry apps?

> Nothing you've cited has been evidence of a leak. Adding a global garbage collector would do nothing to reduce the (non-)"leaks" you observe.

600MB of RAM for nothing is a leak. Denial is futile.

> Enabling those via about:config tweaks and addons slows down your browser. Session and history limits do not affect your browser performance.

1. It seems like you don't have experience with browsers. Go and try them out instead of denying everything. Do you know what prefetch is at all? Or at least pipelining (cheat: https://en.wikipedia.org/wiki/HTTP_pipelining)?

2. Session and history limits DO affect browser performance over a long runtime. If you limit them, your browser will consume less RAM (plus it's easier to find things in a small history), and if another app needs the memory, the OS can swap the browser out and it will come back faster.


> But overall, I think you overestimate how much time you spend loading the website and how much time it's just sitting there, mostly idle.

Reading the website is productive. Waiting for it to load is not.

I'd rather have an app burn a whole core the whole time if it cuts 1s off the load time when I click something.


> Despite fully reloading the page, the HTML version of Gmail consumes fewer network resources (~70KiB) and takes less overall time to return to interaction.

Reminds me of the article "Browsers are pretty good at loading pages, it turns out" [1]. It's really shocking that we've got technology that, from an a priori standpoint, has the capability of making web browsing faster, yet due to poor development practices it consistently manages to make everything worse.

[1] https://carter.sande.duodecima.technology/javascript-page-na...


> Web browsers are faster than ever before, but web sites are bloatier than ever before, eating up all the hardware and software capabilities.

Are they? Have web browsers actually gotten any faster at all over the last, say, 5 years? 10?

JS engines got a bit faster, but what about CSS & HTML parsing? 2D rendering performance? Layout engine performance? DOM performance? Mozilla made a bit of noise about this a year or two ago with their whole Project Quantum push - but had you ever heard a peep about this stuff prior to that? Or since? Nobody benchmarks this stuff, and yet it's insanely critical to interactive performance. But since it's harder to measure than JS performance, the only thing ever measured is JS performance. And occasionally, rarely, page load speeds.

Open up a 10MB plain text file in Chrome and it completely falls over. Zero JS. Zero CSS. Zero HTML. Just plain text. Are modern browsers really fast?
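(Easy to reproduce; a Node.js one-off that writes a ~10MB text file, using an arbitrary filler line:)

    // Generate a ~10MB plain text file to open in the browser.
    const fs = require('fs');
    fs.writeFileSync('big.txt', 'lorem ipsum dolor sit amet\n'.repeat(400000));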

And for what it's worth modern computers are wide - 4 core with SMT is damn near low end these days. Yet the web is still incredibly stuck in the single-thread mode of operation. Both the browser internally and the platform itself (WebWorkers are far too slow, heavy, and restricted to meaningfully be used to offload interactive work). And there's almost no work being done to address this. WASM's threads are the only sliver of light here on the platform side. Is it really surprising that people throw RAM at the problem as a result? Throwing more caches at things is the natural response to being heavily starved for CPU on the single thread you can use.

