For those who don't know (or belong to a newer generation), Microsoft's IE (Internet Explorer) was a browser that held a near-monopoly globally for many years, mostly during the early nineties. Microsoft pulled it off, but the backlash was such that public pressure practically caused a storm that ended with Bill Gates answering some hard questions under oath.
And today, I'm seeing history repeat itself all over again, only Chrome has taken IE's place and there is no storm coming this time. Unlike our predecessors, we are giving in to convenience and acting like it's not a bother at all.
PLEASE DON'T DO IT. Consider the sheer browser market share that Chrome enjoys and Google's data interests for a moment; it might already be too late for us. If a web service insists on Chrome for best performance, then change the service, not your browser! If YouTube gives you trouble, use Vimeo or LBRY instead. Let's unite in this effort and take it seriously; let's bring more diversity to the browser world.
There is no reason why diversity in the browser market should depend on diversity in engines, other than that engineers frequently use this as an argument.
Engines are about as relevant to diversity in the browser market as semiconductors are to diversity in the phone market. They're irrelevant to the end-user experience, and arguably a waste of resources and time.
Browsers that can't compete on actual features and services that matter to users, and instead try to distinguish themselves through technical implementation details, will probably die out, as Firefox's market share shows.
The problem is that it's bad for standards. We're currently in a situation where standards are basically completely irrelevant, and the only thing anyone cares about is support in Blink. This is basically where we were with IE6 a few years ago.
These days the engine authors push features through as standards, rather than a committee specifying features on paper that are then implemented by engine authors. So yes, more than one engine means the "living standard" is not specified by a single engine whose quirks then become the "standard".
I agree with your stance, but Edge is already like the new IE. Chrome is more like the old Firefox: open source, but with a much higher market share. If you work in a hospital setting, Edge is still the most popular browser, and at times it has features that only Edge supports.
Before blindly jumping on the Firefox bandwagon, think about the fact that Firefox exists only in the shape and form Google lets it, since Google contributes almost all (as in >90%) of Mozilla's revenue through the search affiliation deal. Believing that an entity sponsored by Google will save us from Google's dominance is paradoxical. Instead, how about finding a different business model for browsers, like a premium browser, that would truly allow them to be independent?
that may be so (although i thought that firefox has diversified recently and doesn't depend as much on google anymore) but:
who else is going to do it?
firefox still has the best chance.
the more popular firefox gets the easier it should be to get alternate funding.
a premium browser would not sell. browsers are like operating systems. they only provide value through the content and applications they enable. browsers are no longer interesting in themselves. no one actually cares. the only thing that matters is how many websites are not broken.
that's why edge is building on chrome. it's the only way to ensure that they stop breaking websites.
brave has an interesting business model (and i am not saying i like it) by motivating its users to pay for content and taking a cut.
Will Google renew the deal? It seems to me that Mozilla is trying to move in the "premium" direction you are suggesting. They are building services that could be monetized. But it might fail pretty hard because it's a pretty different model.
Google paying Mozilla hundreds of millions of dollars isn't exactly a charity. Google care very much about the complete and utter ubiquity of their search.
Using Firefox more will put Mozilla in a stronger position to negotiate a better deal, whether with Google or someone else. The worst that could happen is Firefox turning into some kind of boutique browser for connoisseurs willing to pay, and that couldn't compete.
They might be able to raise revenue using other innovative methods eventually, but that won't amount to anything if we aren't using the hell out of their (very good) browser.
> Google care very much about the complete and utter ubiquity of their search.
Google doesn't even need Mozilla to increase/maintain their search market share. When Firefox's default search engine was changed to Yahoo! in the U.S., many users switched back to Google [1]. Also, people who care about privacy use things such as DuckDuckGo anyway.
This increasingly happens to me. It won't make me switch to Chrome, but it's an extremely worrying sign. I remember the New Yorker subscription page was totally broken in Firefox, and I've had issues logging in to my credit card site.
Having used FF since it came out, I have recently (maybe starting in 2018) been finding some broken sites here and there, mostly new flashy sites, but I've also found some hand-built government and university sites with issues in FF.
I've reported a few, and yes, they tell me the devs do not test their code in Firefox. Since then I've started asking around about what devs test against, and more and more dev teams are not testing their code for FF compatibility.
I know you can't just take my word for this, so please consider it an opinion, but I think FF has started to be phased out in web development, and its compatibility now relies mainly on widely used, fully FF-compatible frameworks like WordPress and similar.
Please don't do this! The only way of properly sending your browser the code that it needs (like polyfills) is if we can accurately rely on your browser's name and version string. I can't tell you how many browsers I see that spoof their user agents, causing us to misidentify their list of implemented features.
User agent isn't reliable enough to do this and definitely wasn't intended to be used this way, so you're just asking for brittle code. Supposedly Chrome is freezing its user agent very soon, so it's not even a good path going forward.
They're freezing everything but the significant browser version[0], which would still allow feature detection. Additionally, why do you say it wasn't intended to be used that way? As I recall, it's been used to identify supported features since its inception; user agent strings for non-dominant browsers often contained the user agent strings of dominant browsers to trick servers into sending features they thought only specific browsers could support.
I do think in special cases you should use the user agent to send proper code, but most businesses probably don't need this today.
With https://caniuse.com/ and a good knowledge of the shape of your traffic, it no longer seems critical that the 0.05% of users somehow still on IE6 who visit your site get all the eye candy.
Now, if you're a government site, I think you should be taking the time to ensure as many people as possible can access your site bug-free, but that could just mean making it dead simple. If you're a large business where 0.05% of traffic is a few million dollars of lost revenue, then yeah, go ahead and hire those engineers.
For the rest of us just let the eye candy fail, get the site to work and forget about it.
Sorry for the long post; I'm half writing this for you and half writing this for everyone else who's responded.
I think in our case a lot of our decisions are based on two success criteria:
1. We want our developers to be able to use language features (like Promises, Maps, and Iterators) that make development easier.
2. We need to pick a solution that offers the best performance in the browser. We run an e-commerce marketplace; we pay for every additional byte we send over the wire in our conversion metrics, whether it's in the short term or the long term.
It's not really reasonable for us to only write JavaScript that works for the lowest common denominator of our traffic (we still support IE11!), but at the same time, we also can't drop support for those browsers. So we have to partition our traffic so that modern browsers can skip the polyfills they don't need, while the older browsers that need them still get them. There really isn't a better way to do this than server-side parsing of the user agent string. Detecting features in the browser means we have to incur another round trip, which delays the execution of all our other JavaScript and hurts usability metrics like Time To Interactive[0]. I have to plug Polyfill.io[1] here; their service is open source and works extremely well.
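To make the partitioning concrete, here's a minimal sketch (not our actual code; the Express setup, the bundle paths, and the IE-only regex are placeholders) of server-side UA-based polyfill serving:

    // Sketch: serve a polyfill bundle only to legacy browsers, decided
    // server-side from the User-Agent header. Bundle paths are hypothetical.
    const express = require('express');
    const app = express();

    // IE11 advertises itself with "Trident/7.0"; older IEs use "MSIE".
    function needsPolyfills(userAgent = '') {
      return /Trident\/|MSIE /.test(userAgent);
    }

    app.get('/', (req, res) => {
      const legacy = needsPolyfills(req.get('User-Agent'));
      res.send(`<!doctype html>
        ${legacy ? '<script src="/polyfills.legacy.js"></script>' : ''}
        <script src="/app.js"></script>`);
    });

    app.listen(3000);

The point is that modern browsers never download the legacy bundle at all, so there's no extra round trip and no wasted bytes for them.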
And as far as whether this is an anti-pattern or not, it's something that works really well for us. We've implemented both a general polyfill and a user-agent-specific polyfill solution, and there were in fact small performance benefits in the latter, with no cost to conversion.
Plus, whether user-agent parsing is an anti-pattern or not, it's the state of the world. As I've already mentioned above, we don't gain much by avoiding this anti-pattern, so what's our motivation to change our implementation? As a challenge, I'd encourage you (or anyone reading this) to try spoofing your browser's user agent to be IE11. You'd be surprised how little of the internet works, even on sites that claim to support IE11.
Relying on the UA string for browser detection is an anti-pattern. Instead, you should do feature detection. Modernizr [1] is nice, but you can also just do it yourself. Look into CSS @supports too.
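For example, a hand-rolled check is only a few lines; this is just a sketch (the bundle path is made up), but it shows the idea:

    // Sketch: load a polyfill bundle only if a needed feature is missing.
    if (!('Promise' in window) || !('fetch' in window)) {
      var script = document.createElement('script');
      script.src = '/polyfills.legacy.js'; // hypothetical bundle path
      document.head.appendChild(script);
    }

    /* CSS has the same idea built in:
       @supports (display: grid) { .layout { display: grid; } } */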
It looks like an approach like Modernizr relies on downloading the detection code, running it, and then potentially kicking off additional downloads for polyfills. Or it would mean eagerly downloading polyfills on browsers that don't need them. In the former case, you pay the cost of a second network request, especially in older browsers that don't support HTTP/2, and in the latter case, you send a potentially large number of bytes of polyfill code to clients that never use it. What makes using a browser's user agent string an anti-pattern?
You don't really have much of a guarantee about anything from the client when they make a request. You have to take it on good faith when they report a user agent to you. Plus, from real-world experimentation, I'm relatively confident that the performance benefits of the solution I mentioned outweigh the cost of supporting browsers that spoof their user agents.
Both is a better option; sending polyfills for every browser feature you'd want to use easily adds up to hundreds of kilobytes. We use user agents to identify a feature set, but we send feature detection down alongside the polyfills. Detecting features client-side and making a second network request to load polyfills for those features adds pretty significant round-trip time.
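Roughly, the hybrid looks like this sketch (the bundle file and package names are just examples): the server picks the bundle from the UA, but each shim inside it still checks the feature first, so a spoofed UA only costs extra bytes, not correctness.

    // polyfills.legacy.js (hypothetical): served only to browsers whose UA
    // says they need it, but every shim is still guarded by a feature check.
    if (typeof window.Promise === 'undefined') {
      // install a Promise shim here (e.g. from the es6-promise package)
    }
    if (typeof window.fetch === 'undefined') {
      // install a fetch shim here (e.g. from whatwg-fetch)
    }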
This is not true. Firefox has never used a Chromium backend.
One of the big arguments for using Firefox is that it encourages diversity, because quite a few "alternative" browsers (Opera, Brave, etc.) use a Chromium backend.
One thought I've been having recently is that WebKit/Blink is almost like the new Linux. If we take the premise that the Operating System moved up the stack to the browser, then browsers like Chrome and Edge are "distros" of Chromium. Firefox is like the BSDs.
"For those who don't know (or belong to a newer generation), Microsoft's IE (Internet Explorer) was one browser that had almost monopolized itself globally for many years mostly during the early nineties decade."
Internet Explorer didn't exist in the early 90s. It was released in 1995.
Yeah, it made it impossible for me to take the OP seriously when they weren’t even around in the timeframe they’re referencing. Nor did they do the research to get it right. Real crusader putting in the work.
Are you sure it's not just habit? I've been using FF devtools for a few years without issues. There are even some things Chromium devtools don't have. The only time I use Chromium devtools nowadays is for testing/audits (Lighthouse).
My experience is that Chrome's JS dev tools are generally better (and faster) but CSS and the network inspector are better in FF.
The other day a junior teammate asked for my help with something he couldn't understand: a file upload feature had stopped working on his machine for no apparent reason.
It did not make any sense: the backend was receiving the file but somehow identifying it as zero-length (and hence rejecting it as an invalid file). Chrome said everything was just fine, with the network inspector showing the request but claiming it was CORS-blocked. When I tried copying the request (right-click -> copy -> as curl) and running that, the response was seemingly normal (and included the proper CORS directives).
After some time fiddling with this, I decided to try Firefox. On the first try, Firefox correctly displayed the request failing with a "413 Request Entity Too Large" status. That immediately prompted me to check nginx's configuration. Of course, the dev had updated his nginx container, and it turned out the latest version only allows a 1 MB max body size by default.
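For anyone hitting the same thing, the relevant directive is client_max_body_size, which defaults to 1m and makes nginx answer with 413 when the body is bigger; a sketch (the 20m value here is arbitrary):

    server {
        # raise the request body limit so uploads aren't rejected with 413
        client_max_body_size 20m;
    }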
Tracing back our steps, I realized that for some fucked up reason Chrome was identifying the error properly, but not showing it in the inspector tools. Instead, it showed a made up request with an empty body payload (and that's what it copied for me to run/inspect through curl). I'm guessing it is not intentional that it works this way, but seriously: how hard is it to show the actual request the browser is making instead of a made-up one in the inspector tools? Why on earth would you show something different there?
I probably won't run anything besides Firefox as my web browser. But I see the web, as it currently is, as broken beyond repair.
So I am more interested in moving away from the web. Things like using terminal programs (youtube-dl, rtv etc.), native GUI apps, or complete alternate protocols like Gemini[1].
I see the web as completely out of my control as it is. I see my usage of it as more of a virtual machine for the things I need it for (government stuff, communication platforms, etc.).
I feel like slowly packing up and leaving.
But I will still try and exert some decision making while I have to use it. And I intend to do this by choosing Firefox-based browsers.
I switched back to FF a few years ago after they came out with the faster engine and never looked back. I'm surprised more webdevs don't use FF. The console, IMO, is way, way better. In Chrome, I see an XHR request, I click it, and it takes me to another tab, where I have to find the request again (because it doesn't take me to the request, just the tab) to see its details. FF shows them inline, with collapsible tabs for headers, content, etc. I checked a couple of weeks ago; Chrome has done nothing to fix this. But they sure have expanded their surveillance.
I don't see myself switching away from Firefox any time soon but the "megabar" fiasco really rubs me the wrong way and I hope it isn't indicative of their direction going forward.
In the past, when Firefox has changed default behavior, there's almost always been a way to revert the change. They moved tabs to the title bar to match Chrome; I changed an option that put them back where I like them. They auto-hide the bookmark toolbar; I re-enable it. With the latest Firefox release, I've yet to find an easy way to disable the godawful, distracting zoom effect on the URL bar. The about:config options that used to work no longer do. I really hope they reverse course and give us a straightforward way to disable it.
There's a difference between IE and Chrome: the former was completely closed source, whereas Chrome's engine is open source and there are already many distributions of it, including one backed by another major company. If Google takes Blink in a direction users no longer want, a Blink fork and open-source distro can be made, in the same sense as Firefox. I'd be more worried about the prevalence of the Safari engine on iOS, where there are no alternatives.
> If Google takes Blink in a direction users no longer want, a Blink fork and open-source distro can be made, in the same sense as Firefox.
I'm not completely sure about that:
1. Maintaining a browser engine is no easy task.
2. Where does the copyright reside? For instance, contributing to various open source projects requires signing a CLA so that among other things the license of the project can be changed afterwards.
Definitely not an easy task, but my point is that it's not the same dead-end situation we had with IE. During the IE era the browser came bundled with the OS, it was closed source, and once Microsoft had squashed the competition it stopped updating it.
About licensing, I admit I'm no expert, but somehow Brave and Edge use the engine right now without running into legal trouble.
The IE monopoly era (with its drawbacks) was in the 00s, with the infamous IE6[0]. The lawsuit you mention is unrelated; it was about the bundling of IE with Windows, notably in Windows 98, a few years before.
I believe framing FF vs. Chrome like IE vs. other browsers is a bit unfair here. IE was a dinosaur, a complete fiasco of almost unmaintained software.
Now we have open-source Chromium, actively maintained, supported by several big entities (think Chrome, Brave, Edge, Opera, Vivaldi, and maybe thousands of smaller ones, think mobile browsers); it works astonishingly well, it keeps being developed at a fast pace and continuously improving. And then we have Firefox, whose open-source code is barely used by third parties at all and which is supported by a relatively small company. I think Chromium isn't going to be like IE anytime soon.
Moreover, there's the speed of browsing and rendering in Chromium vs. Firefox.
I just recently took the plunge and started to use Chromium as my main browser; previously Firefox had the lead.
Speed is the reason. Chromium, in its default setup and with tons of extensions enabled, is a lot faster than Firefox.
I didn't do it lightly. I still like the Firefox GUI a lot (and the extensions are far more usable, think Powertabs), but I've been reading the Phoronix benchmarks comparing FF and Chrome for a year now and decided to give Chromium a chance.
It went really well; I didn't want it to go that well! I wanted to use FF, but after trying Chromium for a couple of weeks I can tell its speed advantage is clearly noticeable as a user; it's amazing how much faster than Firefox it is.
I didn't want to believe it, but the Phoronix benchmarks were crystal clear in predicting that: Chromium on a bad day is at least 40-50% faster than Firefox in almost any measurable respect.
Yep, Firefox with WebRender and hardware acceleration enabled, even on Intel drivers (in Linux), with no issues at all, feels a LOT slower than Chromium.
Yes, Chrome and other Chromium browsers are faster than Firefox, but you probably don't need that speed for personal browsing, like reading HN, blogs, news sites, etc.
These days I use Chrome for work-related purposes only, i.e. for accessing webapps. For webapps, browser speed absolutely matters, so I stick with Chrome for now. But for personal browsing, like social media, news, and random stuff on the web, I stick with Firefox on all my devices. I haven't used Chrome for personal browsing in years and haven't noticed any performance issues so far, because all the stuff that requires a fast browser is usually work-related (at least in my case).
IE had 95% market share at some point; Chrome is nowhere close, thanks primarily to Apple with its Safari browser.
But the trend is worrying, with more and more sites working only in Chrome (or another Chromium derived browser), and many software developers testing only for Chrome.
It's not just web browsers: we should also use alternative online platforms (e.g. social networks), services (e.g. email), and systems (e.g. operating systems, especially for mobile devices).
Using "not-best" or non-mainstream alternatives often means you enjoy less network effects and the produce/service often has more rough edges. But diversity brings competition and freedom of choice.
In the last few weeks, people have been talking about how the Apple App Store has become a monopoly. I believe one service can dominate the market only if we are willing to allow it.
I wish others would consider more than just the UX and cost when choosing a service/product.
The best product isn't necessarily the best choice for you.