Please don't do this! The only way we can properly send your browser the code it needs (like polyfills) is if we can accurately rely on your browser's name and version string. I can't tell you how many browsers I see that spoof their user agents, causing us to misidentify their list of implemented features.
The user agent isn't reliable enough to do this, and it definitely wasn't intended to be used this way, so you're just asking for brittle code. Supposedly Chrome is freezing its user agent very soon, so it's not even a good path going forward.
They're freezing everything but the significant browser version[0], which would still allow feature detection. Additionally, why do you say it wasn't intended to be used that way? As I recall, it's been used to identify supported features since its inception; user agent strings for non-dominant browsers often contained the user agent strings of dominant browsers to spoof servers into sending features that they thought only specific browsers could support.
I do think in special cases you should use the user agent to send proper code, but most businesses probably don't need this today.
With https://caniuse.com/ and a good knowledge of the shape of your traffic, it no longer seems critical for the .05% of users who somehow still visit your site on IE 6 to have all the eye candy.
Now if you're a government site, I think you should be taking the time to ensure as many people as possible can access your site bug-free, but that could mean just making sure it is dead simple. If you're a large business where .05% of traffic is a few million $ of lost revenue, yeah, go ahead and hire those engineers.
For the rest of us just let the eye candy fail, get the site to work and forget about it.
Sorry for the long post; I'm half writing this for you and half writing this for everyone else who's responded.
I think in our case a lot of our decisions are based on two success criteria:
1. We want our developers to be able to use language features (like Promises, Maps, and Iterators) that make development easier.
2. We need to pick a solution that offers the best performance in the browser. We run an e-commerce marketplace; we pay for every additional byte we send over the wire in our conversion metrics, whether it's in the short term or the long term.
It's not really reasonable for us to only write JavaScript that works for the lowest-common-denominator of our traffic (we still support IE11!), but at the same time, we also can't drop support for them. So, we have to partition our traffic in order to allow modern browsers to skip the polyfills they don't need, while supporting the older browsers that need them. There really isn't a better way to do this than server-side parsing of the user agent string. Identifying features in the browser means that we have to incur another round-trip, which delays the execution of any of our other JavaScript and hurts usability metrics like Time To Interactive[0]. I have to plug Polyfill.io[1] here; their service is open-source and works extremely well.
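To illustrate what I mean by partitioning, here's a rough sketch of the server-side approach (an Express-style handler with made-up bundle names, not our actual setup); a real implementation would use a maintained UA-parsing library rather than a one-line regex:

    // Sketch only: route browsers to a "legacy" or "modern" bundle based on the UA.
    // The bundle names and /dist layout are hypothetical.
    const path = require('path');
    const express = require('express');
    const app = express();

    // Extremely rough check; use a proper UA parser in practice.
    function needsPolyfills(ua) {
      if (!ua) return true;                  // no UA at all? play it safe
      return /Trident\/|MSIE /.test(ua);     // IE11 and older
    }

    app.get('/app.js', (req, res) => {
      const ua = req.headers['user-agent'];
      const bundle = needsPolyfills(ua) ? 'legacy-bundle.js' : 'modern-bundle.js';
      res.sendFile(bundle, { root: path.join(__dirname, 'dist') });
    });

    app.listen(3000);

The point is that the decision happens before any JavaScript reaches the client, so modern browsers never download the polyfills and older ones never need a second request to get them.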
And as far as whether this is an anti-pattern or not, it's something that works really well for us. We've implemented both a general polyfill and a user-agent-specific polyfill solution, and there were in fact small performance benefits in the latter, with no cost to conversion.
Plus, whether user-agent parsing is an anti-pattern or not, it's the state of the world. As I've already mentioned above, we don't gain much by avoiding this anti-pattern, so what's our motivation to change our implementation? As a challenge, I'd encourage you (or anyone reading this) to try spoofing your browser's user agent to be IE11. You'd be surprised how little of the internet works, even on sites that claim to support IE11.
Relying on the UA string for browser detection is an anti-pattern. Instead, you should do feature detection. Modernizr [1] is nice, but you can also just do it yourself. Look into CSS @supports too.
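To be concrete, "doing it yourself" just means probing for a feature before relying on it; a hand-rolled sketch (not Modernizr, and CSS.supports() is the JS counterpart of the @supports rule):

    // Hand-rolled feature detection: check for the API before using it.
    var hasPromise = typeof Promise !== 'undefined';
    var hasFetch = typeof window.fetch === 'function';

    // CSS feature detection from JavaScript.
    var hasGrid = !!(window.CSS && CSS.supports('display', 'grid'));

    if (!hasPromise) {
      // load or define a Promise shim here, before any promise-using code runs
    }
    if (hasGrid) {
      document.documentElement.className += ' has-grid';
    }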
It looks like using an approach like Modernizr relies on downloading the detection code, running it, and then potentially kicking off additional downloads for polyfills. Or it would mean eagerly downloading polyfills on browsers that don't need them. In the former case, you pay the cost of a second network request, especially in older browsers that don't support HTTP/2, and in the latter case, you send a potentially large number of bytes of polyfill code to clients that don't use them. What makes using a browser's user agent string an anti-pattern?
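For what it's worth, the detect-then-fetch flow I'm describing looks roughly like this (a sketch; the /polyfills endpoint and startApp function are hypothetical), and the injected script tag is that second round trip:

    // Sketch of detect-then-fetch: the dynamically added <script> is the
    // extra network request that delays everything else.
    var missing = [];
    if (typeof Promise === 'undefined') missing.push('Promise');
    if (typeof Map === 'undefined') missing.push('Map');
    if (typeof Symbol === 'undefined') missing.push('Symbol'); // iterators need this

    if (missing.length) {
      var s = document.createElement('script');
      s.src = '/polyfills?features=' + missing.join(','); // hypothetical endpoint
      s.onload = startApp;                                // app boot has to wait
      document.head.appendChild(s);
    } else {
      startApp();
    }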
You don't really have much of a guarantee about anything from the client when they make a request. You have to take it on good faith when they report a user agent to you. Plus, from real-world experimentation, I'm relatively confident that the performance benefits of the solution I mentioned outweigh the cost of supporting browsers that spoof their user agents.
Doing both is a better option; sending polyfills for every browser feature you'd want to use is easily hundreds of kilobytes. We use user agents to identify a feature set, but we send feature detection down alongside the polyfills. Detecting features on the client side and then sending a second network request to load polyfills for those features adds pretty significant round-trip time.
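Concretely, the guarded bundle looks something like this: the server only sends it to user agents that look like they need it, but every shim still checks before overwriting anything, so a spoofed UA costs some extra bytes instead of breaking the page. A sketch with a single Object.assign shim (the Promise, Map, etc. shims get the same wrapper):

    // Guarded polyfill: only defined if the feature is actually missing.
    if (typeof Object.assign !== 'function') {
      Object.assign = function (target) {
        'use strict';
        for (var i = 1; i < arguments.length; i++) {
          var source = arguments[i];
          if (source != null) {
            for (var key in source) {
              if (Object.prototype.hasOwnProperty.call(source, key)) {
                target[key] = source[key];
              }
            }
          }
        }
        return target;
      };
    }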