> For the Edge browser Microsoft has again decided to develop their own rendering engine, instead of relying on open source alternatives (Blink, Gecko, WebKit). I have to doubt their decision as a browser engine is a big development effort and they need to be updating it from now to eternity.
Um, why would they move when they already have an engine? Sure, you can argue they should have moved to WebKit or Gecko or even Opera's rendering engine years ago, but Trident has been improved significantly over the years. Don't forget Edge is based on Trident, so it's not entirely new, as the post implies; Trident provided a good base to rip legacy support out of and to improve other things.
Beyond that, though, it seemed fine; it just kinda felt like the author had a bias against Microsoft from that first statement questioning their rendering engine decision. I'm also not entirely convinced they tuned explicitly for the benchmarks they called out; maybe they did, but I don't see why they necessarily would have had to.
I sincerely hope they NEVER use another engine. Diversity is our friend, and having multiple engines all on similar terms means that bugs won't end up as "correct" and it helps push everyone forward.
Rather than them adopting an open source engine, I would hope they open source their own. That would be nice to see, considering Microsoft has been open sourcing other projects, and they already give this particular product away somewhat freely. It would also allow a community to build up around the engine and report, as well as possibly contribute, bug fixes.
I thought the complaining that Microsoft wasn't using WebKit to be hilarious in light of Google's forking WebKit -- if Google and Apple couldn't collaborate on WebKit development, there's no reason to suspect that Microsoft and Apple (or Microsoft and Google, in the case of Blink) would be able to collaborate any better. Meanwhile, Mozilla no longer allows embedding Gecko, and warns that doing so will mean not getting the latest security fixes.
For a long time, Microsoft made a strategic decision to ignore the web, and that hurt a lot of things. And Microsoft has privileged archaic Intranet sites over progress on the web ever since, which hasn't been great either (unless you run one of those Intranet sites). But the idea that Microsoft doesn't have the resources to maintain a browser engine is ludicrous -- they're Microsoft. And they're already ahead of one of the suggested alternatives, WebKit. (And WebKit is probably the one of the three that's least tied to a specific browser implementation, so it's the one they could most plausibly have picked in place of revamping Trident.)
I'm working on a web app that uses typed arrays heavily.
Chrome runs my test case in 2.7 seconds, Firefox in 3.2, and IE Edge in 23 seconds... (IE 11 in 30+)
Now, to be fair, I developed "for" Firefox, checking that it worked in IE and Chrome the whole way, but that is a pretty large performance difference.
Plus, I've always hated benchmarks. They are so easy to "game" without real improvements in actual code. They are great for development and regression testing, but for comparing engines I've found them nearly useless.
All that being said, I applaud the IE team for really sticking to their promise of improving the browser and it seems they are going to give Chrome and FF a run for their money soon.
This thread isn't about IE in edge mode; it is about the Edge browser shipping with Windows 10 (which replaces IE). IE in edge mode and the Edge browser have different performance characteristics.
I'd be interested to see how your test case performs on the Edge browser.
It is confusing when you add IE, because IE 11 has an edge mode (essentially its newest rendering mode). IE 11's edge mode and the Edge browser share a handful of components, but they're selling Edge as a brand new browser with a lot of cruft (backwards compatibility) removed and other improvements which IE won't ever get (11 is the last version).
Awesome, I'll remember this. It still pisses me off that they named it that. It's just going to lead to a bunch of annoyance and confusion. IMO something One related would have been nicer, but I'm sure they know what they are doing.
> Chrome runs my test case in 2.7 seconds, Firefox in 3.2, and IE Edge in 23 seconds... (ie 11 in 30+)
I think you are doing something wrong. I have a light app and a very heavy SPA (written really badly, with almost 5 MB of code, D3 + Highcharts). Still, both apps load in less than 4 seconds in IE 11, FF, Chrome and Safari.
It's not page load, it's the actual application running.
It does some fairly heavy image manipulation in web workers using typed arrays. The app loads a little faster in IE (the code comes in just under 1 MB), but the runtime of my few test cases is orders of magnitude slower.
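For context, here's a hypothetical sketch (not the author's actual code) of the kind of typed-array workload being described: a grayscale pass over RGBA pixel bytes, the sort of tight loop you'd run inside a web worker via postMessage with transferable buffers. Engines differ a lot in how well they optimize hot loops over Uint8ClampedArray like this one.

```javascript
// Hypothetical sketch of a typed-array image-manipulation kernel.
// In the real app this loop would run inside a web worker, with the
// underlying ArrayBuffer passed as a transferable to avoid copying.
function grayscale(pixels) {
  // pixels: Uint8ClampedArray of RGBA bytes, modified in place
  for (let i = 0; i < pixels.length; i += 4) {
    // BT.601 luma weights; |0 truncates to an integer
    const y = (pixels[i] * 0.299 + pixels[i + 1] * 0.587 + pixels[i + 2] * 0.114) | 0;
    pixels[i] = pixels[i + 1] = pixels[i + 2] = y; // alpha left untouched
  }
  return pixels;
}

// One pure-red pixel (255, 0, 0, 255) becomes (76, 76, 76, 255)
const px = new Uint8ClampedArray([255, 0, 0, 255]);
grayscale(px);
```

It's exactly this sort of per-pixel loop where a slow typed-array path in an engine turns a 3-second run into a 20+-second one.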
Issue #1: performance claims based on tests from one device configuration. The best performance data is gathered from as many sources as possible. In the spectrum of statistics, one data point is not enough.
Issue #2: "Personally I have found the Peacekeeper results to be a reliable measurement of web browsers performance." Is there data to back up this claim?
* Its layout benchmarks do not really stress layout and instead either stress basic painting operations or DOM accessors. Most pages do not sit there calling style.top in a loop over and over.
There is no reason to think that Peacekeeper's JS benchmarks are particularly better than V8's; in fact, they're probably worse, due to the proliferation of microbenchmarks. You'd get about the same effect by going to jsperf.com and clicking around.
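To make the microbenchmark complaint concrete, here's an illustrative example (mine, not from the article) of why jsperf-style loops mislead: if the measured expression's result is never consumed, a JIT is free to dead-code-eliminate the loop body, so you end up timing nothing at all.

```javascript
// A naive microbenchmark: the result of Math.sqrt is discarded, so an
// optimizing JIT may remove the call (or the whole loop) entirely.
function badBench(iterations) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    Math.sqrt(i); // dead code from the engine's point of view
  }
  return performance.now() - start; // may measure an empty loop
}

// A slightly less naive version: accumulate into a "sink" that is
// returned, so the work can't be optimized away.
function betterBench(iterations) {
  let sink = 0;
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    sink += Math.sqrt(i);
  }
  return { elapsed: performance.now() - start, sink };
}
```

Two engines can score wildly differently on `badBench` depending only on whether their optimizer spots the dead code, which says nothing about real page performance.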
After sadly verifying Microsoft's claims, the author goes on to find third-party benchmarks where they'd lose, to make the point he initially wanted to make.
'It seems like Microsoft has been targeting their optimization effort for the competitor benchmarks in order to show impressive results for their new product. When it comes to more intensive and complex HTML5 benchmarks they are still miles behind the competition.'
I think instead they worked on JavaScript performance, so in JS benchmarks like SunSpider and Octane they do well. Meanwhile their page loading was not as optimized, so benchmarks that exercise whole pages, such as Peacekeeper, don't perform as well.
Peacekeeper measures your browser's performance by testing its JavaScript functionality. JavaScript is a widely used programming language used in the creation of modern websites to provide features such as animation, navigation, forms and other common requirements. By measuring a browser’s ability to handle commonly used JavaScript functions Peacekeeper can evaluate its performance.
This kind of reflexive cynicism is annoying and pointless. Especially because we can just as easily flip it around and accuse Microsoft of cherry picking and optimizing for specific benchmarks.
Neither of these accusations is in any way constructive. I'd be much more interested in real-world examples where one browser or another is noticeably slow.
While it is obvious that the author was hoping to discredit Microsoft's speed claims, the result of running the original tests was an agreement on sign but not on magnitude (assuming the uncertainty in the MS claims is entirely in the last digit presented).
The new tests don't seem particularly hand-picked given that they are among the top results I get when I google "browser speed test", arguably a sign that they aren't overly obscure.
As an aside, all the benchmarks are made by parties other than Microsoft, so they should count as third-party. And being third-party is typically seen as a virtue, due to the tendency of vendors to overtune for their own benchmarks.
So while this was obviously a bit of hostile benchmarking, it doesn't seem like any special effort was needed to get a discrediting result; simply benchmarking with other suites proved enough. The fact that the author is clearly hostile to Microsoft does not in itself disprove the results presented, and is not grounds for dismissing them.
Looking around the site and other pieces written by the same nick, I find it unfair to label him as "Microsoft hostile". After all, he has written a quite favorable review of a cheapo 7" Windows 8.1 tablet.
> What to make of these results? It seems like Microsoft has been targeting their optimization effort for the competitor benchmarks in order to show impressive results for their new product. When it comes to more intensive and complex HTML5 benchmarks they are still miles behind the competition.
His conclusion stands: he got about half of the claimed perf gains using the benchmarks the Edge team advertised, but when tested against other independent benchmarks Edge's performance falls dramatically short, making it reasonable to deduce that Edge is optimized around the benchmarks the PR team is advertising.
It's unclear how these additional tests were selected. If the criterion was to find a benchmark where Chrome performs better, then you can't conclude much from this.
The IE team has said that they prioritize features based on how they are used on the most popular websites. I'd like to see a benchmark that was just the wall-clock time to load the Alexa Top 1,000.
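A minimal sketch of what such a benchmark could look like, with heavy caveats: `topSites` below is a placeholder for the real Alexa list, and a fetch-based timing only covers the network transfer of the main document, not parsing, layout, or script execution. A real harness would need to drive the browser itself (e.g. via WebDriver) and record time to a load event.

```javascript
// Hypothetical sketch: summed wall-clock "load" time over a site list.
// fetchFn is injectable so the timing logic can be tested without a network.
async function timeLoad(url, fetchFn = fetch) {
  const start = performance.now();
  await fetchFn(url); // resolves when the response arrives
  return performance.now() - start; // elapsed milliseconds
}

async function totalLoadTime(urls, fetchFn = fetch) {
  let total = 0;
  for (const url of urls) {
    total += await timeLoad(url, fetchFn); // sequential, like a user visiting pages
  }
  return total;
}

// Placeholder list; a real run would substitute the Alexa Top 1,000.
const topSites = ["https://example.com", "https://example.org"];
```

Even this crude metric would reflect what users actually experience far better than a synthetic suite, which is presumably why the IE team prioritizes by real-site usage.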
Not sure what the point of this is. Microsoft never argued that Edge is faster than Chrome at everything. They specifically said it was faster for certain benchmarks.
But the author is upset because it isn't faster on the specific benchmarks he prefers. If it's slower at Peacekeeper, does that mean the browser is now slow?
And then weirdly Mint/Linux comes into the argument as though it has any relevance to Edge. Nobody is switching operating systems because of browser microbenchmarks.
The post first checks whether Microsoft's claims hold. His tests show that Edge is faster on SunSpider/Octane, but not by the margin that Microsoft suggested. The article also points out that old hardware is used as the test bench, and that hardware will make a difference.
Then the article expands the scope to other benchmarks, and there is a (subjective) opinion that Peacekeeper's results hold up in the real world.
I can't see where the author is upset. Rather I see a typical geek hobbyist experiment and the results posted to a FORUM community - not to a professional or scientific publication.
I read it as: the browser is fast at its core but still needs optimization for the advanced HTML5 features. I guess it will get better over time. This is a good start. No matter what people think, this is a great step forward. The web as a platform is only as open as the variety of rendering engines providing standards-compliant features. If all the browsers standardize on the same rendering engine, it's not the web platform that is the standard but that particular rendering engine. Even if it is open source, having multiple competing implementations of the same standard is a good thing, especially when companies would like to push for features that benefit them at the expense of the overall community.
I really hope the browser race doesn't get boiled down to raw performance alone, but also covers things like resource hogging, which has made it practically impossible for me to use Chrome on my MacBook.
If Edge can be to Windows 10 what Safari is to OS X, I’ll try to do what I can to move my browser workflow to Edge as much as possible.
The only browser choices these days seem to be either from unethical corporations or unethical open source non-profits. Anyone know where to turn to get a decent browser with no ads or unethical corporations involved?
"Also note our previous tests with Linux Mint vs Windows 10, which suggest that Chrome actually runs faster on Linux than it does on Windows."
Honestly, any test between a Microsoft product and a competitor run on Microsoft's operating system has to be taken with a grain of salt. What's interesting here is the degree of trust given to a vendor that has tried to rig/break even hardware to lock out competition: http://antitrust.slated.org/www.iowaconsumercase.org/011607/...
Worse, below there are people simultaneously believing a vendor with a long track record of a "rig the game" corporate strategy and dissing the blog author for doubting the vendor's claims, even to the extent of accusing the author, who checked only two HTML5 benchmarks, of going on a witch hunt to find benchmarks that disadvantaged the vendor.
> "It seems unfortunate if we do this work and get our partners to do the work and the result is that Linux works great without having to do the work."
Maybe I'm being too naive, but this sounds more like he is concerned about giving away the work rather than locking people in.
I would prefer these kinds of initiatives to be open source, but in my opinion each company has the right to release the technology it develops as it pleases. In my mind, this is the same case as Apple with Thunderbolt, and I think it is fair that if Apple decides to keep its technology private, it should be able to do so.
Absent from both the original blog post and the OP is Mozilla's Kraken test. Is Firefox simply dismissed as slow these days, and their benchmark just discarded to the annals of time?
From arewefastyet.com they seem to be doing OK (even if they keep testing some really slow Chrome version for some reason), so it seems strange they are being left out.
Yeah it seems rather ridiculous to me that Firefox isn't mentioned here -- Firefox consistently beats Chrome on the three major benchmarks (SunSpider, Octane, and Kraken).
The SunSpider benchmark is bullshit, and you shouldn't take anything from it. In particular, Chrome stopped optimizing for SunSpider with their latest JIT, as they optimized for longer-running apps (such as ones you'd find on the web) but took a hit on SunSpider.
But SunSpider is a really bad benchmark anyway. All benchmarks are bad and suffer real problems, but SunSpider more than most.
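The short-vs-long-running distinction is easy to demonstrate (an illustrative sketch, not a real benchmark): a function's first calls run in the interpreter or baseline compiler, while later calls may run optimized code. A SunSpider-style test that finishes in milliseconds mostly measures the cold phase; a long-running app mostly measures the hot one.

```javascript
// Time `calls` invocations of fn and return elapsed milliseconds.
function timePhase(fn, calls) {
  const start = performance.now();
  for (let i = 0; i < calls; i++) fn(i);
  return performance.now() - start;
}

// Some arbitrary deterministic work for the engine to optimize.
function work(n) {
  let acc = 0;
  for (let i = 0; i < 1000; i++) acc += (n + i) % 7;
  return acc;
}

const cold = timePhase(work, 100); // first calls: warm-up included
const warm = timePhase(work, 100); // same work after the JIT has seen it
// On most engines the second measurement is lower, but this is
// engine-dependent and noisy -- which is exactly the point: a tiny
// benchmark rewards fast warm-up, not fast steady-state code.
```

An engine that deliberately trades warm-up speed for better steady-state performance (as Chrome did) will look worse on SunSpider while being faster on real apps.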