In general I like more the classic/old way of versioning the releases, like for example "1.2.2" which has some embedded meaning of what is changing (e.g. <major_change>.<minor_change>.<bugfix>) which hints at how potentially dangerous an upgrade might be or if some new exciting stuff has been included. Jumping directly from 10 to 11 to 12 leaves me clueless without reading all release notes :(
Neither says anything useful. 2019-10 is no more actionable or informative than 76.0.0. That 2019-10 was released in October 2019 is meaningless information, and only "more" in the "babble" sense.
Well, I can't tell whether my extension would break going from version 2019-10 to 2019-11 without reading the release notes.
I would know there is a big chance of breakage if I moved from 76.0.0 to 77.0.0, and I am reasonably certain it won't break if I moved from 76.0.0 to 76.0.1.
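The SemVer heuristic described above can be sketched as a simple comparison. A minimal, hypothetical sketch (the function names are mine, not any real library's):

```python
def parse(version: str) -> tuple[int, int, int]:
    """Split a 'major.minor.patch' string into integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def upgrade_risk(current: str, candidate: str) -> str:
    """Rough risk signal implied by SemVer: major bump = possible breakage,
    minor bump = new features, patch bump = bugfixes only."""
    cur, new = parse(current), parse(candidate)
    if new[0] != cur[0]:
        return "possibly breaking"
    if new[1] != cur[1]:
        return "new features, should be compatible"
    return "bugfixes only, safe"

print(upgrade_risk("76.0.0", "77.0.0"))  # possibly breaking
print(upgrade_risk("76.0.0", "76.0.1"))  # bugfixes only, safe
```

With calendar versions like 2019-10, no comparable signal can be computed at all; every upgrade looks the same.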
But if a website breaks, you can get a better idea of why it happened by comparing your current browser's calendar version to the latest update. For example, you will know you are x years behind on updates. With semantic versioning, you don't know how long your installation has gone without updates.
Since there are more websites than extensions, the utilitarian approach would be to choose 2019-10 over 76.0.0.
One approach is proactive and the other reactive. Unless one checks the release notes every 4 weeks... I would argue the utilitarian approach is exactly the other way around: verify and test for an API break when the major version changes. Whether the browser is 2 years or 10 years old shouldn't make a difference.
And this supposed marketing advantage only lasted for the first handful of releases, when the numbers were under or around 10. Now the numbers are so big that they have become totally insignificant. 57? 63? 98? Who cares any more?
Furthermore, as another comment said, it is impossible to remember in which release they started breaking this or that, since there is no longer a major version number change, which is usually a handy indicator of backward-incompatible changes.
They increase the major version because they don't guarantee backward compatibility. You may think it's foolish to only provide guaranteed API support for 4 or 6 weeks, but it's more than just a marketing gimmick.
The reality is that you must read the release notes for this to be a strategy that can even remotely minimize your risk.
On the one hand, relatively minor upgrades can have huge impacts for some use cases. Some changes are riskier than others, but SemVer and major/minor distinctions are not reliable signals for anything critical.
More important though, if you're skipping browser updates at all, you need to be tracking the security implications of every patch you skip. Otherwise you're increasing your risk by an order of magnitude more than you'd reduce it by deciding a change might be too buggy based on a numbering scheme.
Aside from versioning being clear about how big the change is, Windows Installer has this [0-255].[0-255].[0-65535] limit that I hope people do not learn about the hard way.
It's not so fun to work around, because Windows Installer tries to be smart about things, as opposed to the others, which take a more robust approach.
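The Windows Installer limit mentioned above (ProductVersion fields capped at 255.255.65535) is easy to check for up front. A quick sketch, with a hypothetical helper name:

```python
# Windows Installer's ProductVersion caps each field: major and minor
# at 255, build at 65535. A fourth field is ignored for upgrade logic.
LIMITS = (255, 255, 65535)

def msi_version_ok(version: str) -> bool:
    """Return True if 'major.minor.build' fits Windows Installer's limits."""
    parts = [int(p) for p in version.split(".")[:3]]
    return all(0 <= p <= limit for p, limit in zip(parts, LIMITS))

print(msi_version_ok("70.0.1"))   # True
print(msi_version_ok("256.0.0"))  # False: major exceeds 255
```

A version number like 2019.10.29 fits comfortably; a date-as-major scheme like 20191029.0.0 would not.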
There's an incidental benefit here. There's at least one system (I think Duo + Okta?) which, when configured to check if your browser is out of date, makes a logic error. It should check if your browser is the most recent, and, if it isn't, if the _new release_ is over N days old. Instead, it checks if your browser is the most recent, and if it isn't, if the _old release_ is over N days old. So, if you run Firefox and you get it from your Linux distribution, that breaks during the two days or so in between a new release and e.g. Fedora shipping it.
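The logic error described above is easy to state in code. A hypothetical sketch (the function and field names are mine, not Duo's or Okta's):

```python
from datetime import date, timedelta

GRACE = timedelta(days=14)  # hypothetical "N days" grace period

def out_of_date_buggy(installed: str, latest: str,
                      release_dates: dict[str, date], today: date) -> bool:
    """Flags the browser as outdated once the *old* release is over N days
    old, even if the new release only just shipped."""
    if installed == latest:
        return False
    return today - release_dates[installed] > GRACE

def out_of_date_correct(installed: str, latest: str,
                        release_dates: dict[str, date], today: date) -> bool:
    """Only flags the browser once the *new* release has been out N days,
    leaving distros time to ship it."""
    if installed == latest:
        return False
    return today - release_dates[latest] > GRACE

dates = {"69.0": date(2019, 9, 3), "70.0": date(2019, 10, 22)}
today = date(2019, 10, 24)  # two days after 70.0 shipped
print(out_of_date_buggy("69.0", "70.0", dates, today))    # True: false alarm
print(out_of_date_correct("69.0", "70.0", dates, today))  # False
```

Shorter release cycles shrink the gap between releases, so the buggy check fires more often, not less.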
Must be fun for ESR users, because ESR reports the base version regardless of when it was patched. So ESR 60.9 (released alongside 69, still supported) would look a year out of date.
It's Duo that has the "Software Update Notification" feature ( https://guide.duo.com/software-update ). If more vendors go the direction of faster updates, that feature on Duo (and I'm sure others) will need to be tweaked heavily.
> In recent quarters, we’ve had many requests to take features to market sooner.
Surely not from the users.
> Feature teams are increasingly working in sprints that align better with shorter release cycles.
Poor guys. At least the process “guarantees” long term quality as the result.
> Shorter release cycles provide greater flexibility to support product planning and priority changes due to business or market requirements.
Translation: marketing managers want to change their minds faster than up to now.
I’d prefer more stable, less CPU and battery hungry browser than one that shovels to my desktop the newly decided features from the marketing as soon as possible. But it’s just me.
Your first comment implies you speak for all users; certainly you don't.
Your second comment is antagonistically sarcastic.
Your third comment is easy cynicism. As a counterpoint, Firefox does put a lot of effort into stability and into reducing CPU and power use (see the recent change on OSX in that regard). You seem to be suggesting a shorter release cycle will prevent them from continuing that work.
I don't. I speak just for all the users that don't benefit from the more frequent release cycle.
Note that a more frequent release cycle doesn't mean that any development will take less time (especially in absolute terms). It can even take more, as more frequent release cycles mean that the administrative work around each release increases.
> You seem to be suggesting a shorter release cycle will prevent them from continuing that work.
No. Again you construct what I haven't said, because you don't have valid arguments against what I've actually said.
I'm claiming that the total work, by both the developers inside of Mozilla and by all the users who have to respond to the event of each new release, increases with this management decision. And I quote the management's own argument from the announcement, which I consider weak compared to those costs:
"Shorter release cycles provide greater flexibility to support product planning and priority changes due to business or market requirements"
People say they want better software internals and don't care about visual improvements, but people are empirically wrong about what they say they want:
>I wouldn’t be surprised if these are the same people who complain, “Why does Microsoft spend all its effort on making Windows ‘look cool’? They should spend all their efforts on making technical improvements and just stop making visual improvements.”
>And with Calc, that’s exactly what happened: Massive technical improvements. No visual improvement. And nobody noticed. In fact, the complaints just keep coming. “Look at Calc, same as it always was.”
> I’d prefer more stable, less CPU and battery hungry browser than one that shovels to my desktop the newly decided features from the marketing as soon as possible. But it’s just me.
I'd love for the recent changes to rendering which fix long standing performance issues on OSX to hit release as soon as possible. https://twitter.com/whimboo/status/1168437524357898240 . Many users are running nightly right now to take up these changes because they are so valuable. As a user, I'm requesting Firefox take features to market sooner :)
> Poor guys. At least the process “guarantees” long term quality as the result.
Philosophical question: does letting code incubate for a longer period of time with the same test/user load (same nightly/beta users, same integration test suite) really enhance stability?
> Translation: marketing managers want to change their minds faster than up to now.
This is just blanket cynicism - do you have insight into the Firefox development process that suggests this is true?
I actually took those changes in 70, so they should make it to the release version at the end of October. If you can, please try using 70 Beta and see how the power use improves!
For comparison, Chrome updates "every two-three weeks for minor releases, and every six weeks for major releases".
I guess Beta is moving to daily releases? I suppose it's good, generally I have to restart Firefox once a day for resource leaks and restarting through the update UI is easier than opening the browser console.
I was using Nightly for a while but it turns out that even daily updates aren't fast enough for that. If a new patch breaks something, typically the offending patch isn't backed out and it takes 6-12 hours to get fixed, meaning the fix isn't available in the next nightly and you have to suffer for several days.
I looked a while ago and it seems to be due to some of the webextensions I have installed, probably uBlock Origin but I can't be sure (28 extensions). Basically some things get cached that don't need to be, and then the cheap 5400rpm disk drive makes the browser stutter swapping between tab content processes.
I'm waiting for the hard drive to die so I can justify getting a new SSD or laptop, which I'm pretty sure will solve the problem, so no, I haven't filed a bug.
It's the largest extension I have installed, 2.5 MB. It could also be Stylus or Tampermonkey, but I saw uBO in the performance profiles I made, running the JIT and blocking some threads.
The main issue is capturing a decent profile, the lag caused by the profiler makes it hard to trust anything.
> We’re adjusting our cadence to increase our agility, and bring you new features more quickly. In recent quarters, we’ve had many requests to take features to market sooner.
Mozilla has been tweaking their process forever, but all of it is for naught if they don't work on what users want. The latest FF brags "we are shipping a new “New Tab” page experience that connects you to the best of Pocket’s content", while standard keyboard shortcuts have been broken for well over 10 years. This is not a problem with their release cadence.
Users have a curious way of asking for what they believe is feasible. I've seen users encounter an obvious bug, and work around it by hand, and turn around and ask for something completely different, like an alternative workaround. (Just add a macro system so I can record this sequence of 7 steps to work around this weird thing...) They literally don't think to ask "Fix this bug".
I wonder if that's what's happening here. As an occasional Firefox user, I know that keyboard shortcuts have been broken since the Bush administration, and the codebase is far too complex for me to work on even the simplest features (I've tried). I can absolutely see a user thinking "If they can't do what we want, at least they could do it faster". That feels like a change that's feasible in a way that "Fix the massive complexity problem" or even "Fix keyboard shortcuts (which are apparently massively complex)" do not.
Firefox itself was created by a couple programmers chatting late at night in a Denny's. They made a far better browser than the Mozilla organization of the day could. It's an important organization and it's impressive they can regularly ship useful free software, but user interfaces have never been a strong point for open source, in the way that compilers and runtimes and databases are. Tweak away, but I don't hold much hope for the situation to improve. The next great open-source browser is going to come from 2 or 3 people chatting in a Denny's, not Mozilla with N-week releases, for any value of N.
I don't know what cmd means (I'm kidding, I have seen Apple computers even if I've never used one), but you can find out what the keyboard shortcut is on your own copy of Firefox by hovering the mouse over the Reader mode icon. On my Firefox it says "Ctrl + Alt + R".
They added reader mode to Firefox for Android in 2012. It took them 3 years to add this to desktop Firefox, and almost another year to actually add a hotkey, which you can't configure.
Some shortcuts I use -- like Alt-$NUMBER to jump to a specific tab -- are frequently overridden by websites, which I view as a browser breakage because this should simply not be allowed.
There may have been a late night Denny's involved, but the rendering engine inside the original Phoenix/Firefox came out of the existing Netscape codebase. And that had thousands and thousands and thousands of person hours invested in it, by a largish engineering org, before Firefox saw the light of day.
> They made a far better browser than the Mozilla organization of the day could
I'm guessing this is not the case, because the demands and requirements on browsers have increased massively since then: the breadth of the developer APIs, the security issues that need to be dealt with, an ever-increasing demand for speed, and having the largest company in the world, with effectively unlimited cash, the best minds, and some sharp practices, as your competitor.
I think this is sad news, but that's because I view the current "rapid release" trend as being, on the whole, a bad thing for everyone. It decreases software quality, decreases stability, and increases hassle and stress for everyone from developers to users.
If releases are painful, then something is wrong with your development cycle and deployment process. If you spend the effort to make it painless, then it makes no difference whether releases are 6 weeks or 4 weeks.
Of course, you need an extensive battery of unit tests and integration tests, and lots of user testing in beta. But if you're only deploying smaller, self-contained changes, then there are fewer moving parts at any given time. You'll have stronger guarantees of fewer interactions.
You're addressing the pain for the developer but not the pain for the user and society at large. The user (who may be on a slow internet connection) has to deal with the bandwidth of the more frequent downloads. Society has to deal with additional network congestion costs and electricity usage.
True, but by using binary deltas you can minimize that cost. And failure to upgrade may mean running more inefficient algorithms, wasting more processor cycles, or transferring more data in general. For instance, if you are on a browser that doesn't support Brotli compression then you'll miss out on the dramatic compression improvements provided by that algorithm.
It also means more effort for people who have to validate the changes downstream for potential problems - e.g. extension developers or package maintainers.
In my experience I'm more likely to be running more inefficient algorithms, wasting more processor cycles, or transferring more data in general if I do upgrade software than if I don't.
A faster release cycle done right means fewer features are added per release -- you are releasing 4 weeks of engineering work instead of 8 weeks or whatever. This can make bugs less likely if done right. It also makes it a lot easier to react quickly to bugs that do get released, and if you have to roll back, you are rolling back fewer new features.
If you are in a position to release every 4 weeks, then it doesn't matter. Release faster. A slower release schedule doesn't mean you have more testing time. It means you tend to release more in one feature. Further, because it takes longer to release, you run into a few problems.
1) People rush to get things into release because if they miss the release, they are waiting longer to get it out. If the release is every 4 weeks, a missed release is not as big a deal.
2) Long release cycles means things done early in the release get forgotten, so when it's finally released, they tend to be off people's minds.
3) Larger releases become harder to update and deploy. Larger file sizes, a lot more changes all at once.
As someone who has done both (long and short release schedules) I can't imagine going back to long release schedules. It has a lot more complexity and is a lot more error prone than a shorter release schedule. Not only do you get the practice, but people are more inclined to wait until next release to get it right.
> A slower release schedule doesn't mean you have more testing time. It means you tend to release more in one feature.
Plus, most of the serious bugs are found right away when a new group of users starts testing, e.g. when Firefox Nightly version moves to Beta or Beta moves to stable Release. Firefox Nightly, Beta, and Release channels have very different user populations. They have different hardware (Nightly users are power users with big, fast machines), software (Release users are more likely to use anti-virus software or not have the latest GPU driver updates installed), and browsing usage.
Mozilla's studies of crash rates and bug fixes for Firefox Beta showed that most crashes and bugs are seen in the first two weeks or so. Having more beta "bake time" with a very long beta cycle has little benefit after the first batch of bugs is found and fixed.
An interesting historical tidbit: about 60% of Firefox Beta users are in India and Indonesia. That is definitely not representative of Nightly or Release user populations! The apocryphal story is that many years ago, a local computer retailer started passing around copies of a beta CD of Firefox 4 so people didn't have to download it over a slow or metered internet. And then all those people are still on the Firefox Beta channel.
It actually does the opposite. It's big, infrequent releases that are risky and less stable. Users update automatically; they barely notice it today. Most non-techie users have no idea that it is happening. A few years ago we had these big-bang releases with browsers, operating systems, etc., and things were much worse. Anytime you unleash millions of new lines of code on users, it's almost guaranteed that there will be some stuff wrong with it that your months or years of testing did not catch.
Short release cycles mean smaller deltas, and they also make postponing the merging of changes to the next release less disruptive. Merging things when they are ready, instead of merging them too early to avoid missing some big release, is better.
Another reason smaller deltas are better is that risk does not scale linearly with the amount of change, but more like quadratically or exponentially. So the fewer changes you introduce with a release, the less effort you need to ensure all is well. The testing overhead increases massively with the amount of change, since there are more combinations of things that can go wrong, all of which need testing. Finally, as the time between the introduction of an issue and people finding it grows, analyzing it in the presence of other bugs becomes harder. Shorter feedback cycles are great for finding issues early.
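One way to make the worse-than-linear intuition above concrete: if each release touches n changes and bugs can hide in pairwise interactions between them, the number of interactions to test grows as n(n-1)/2. A toy illustration (the split into 10 vs 20 changes is an assumption for the example):

```python
from math import comb

def pairwise_interactions(n_changes: int) -> int:
    """Number of change pairs that could interact badly: C(n, 2)."""
    return comb(n_changes, 2)

# Two 4-week releases of 10 changes each vs one 8-week release of 20:
small = 2 * pairwise_interactions(10)  # 2 * 45 = 90 pairs to worry about
big = pairwise_interactions(20)        # 190 pairs in one go
print(small, big)
```

The same total amount of change, shipped in two halves, exposes roughly half as many potential interactions per release.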
Currently, going from nightly to release takes about 12-16 weeks (assuming 6-8 week cycles). With this change, it will reduce to about 8 weeks. That's still two months of exposure to a large group of nightly testers and an even larger group of beta testers.
Yes, I'm well aware of the arguments in favor. It's just that I don't see it playing out much in the real world. In my experience (on the whole -- there are exceptions), rapid released software tends to be of lower quality.
Even if that weren't true, though, rapid release means a lot more updates, and updates are a pain in the butt.
Can you give concrete examples? For the software I use that is on a rapid release schedule (Firefox, Chrome, Dropbox, probably others I'm not even aware of), I don't notice any sort of quality issue. The only "pain in the butt" is having to restart my browser.
+1. Another benefit of rapid releases is being able to release major refactors behind feature gates. This approach was taken when Mozilla shipped an mp4 parser written in Rust [1]. By having rapid release cycles, they could spend a cycle or two verifying that the new component produced identical results "in the wild" before enabling it by default. If they had a 1-year release cycle, they never would have been able to verify their implementation at such a scale.
As a user I also now need to keep up with the updates more frequently. Either deal with a more frequent risk of getting an update I don't like, or get nagged more frequently with annoying pop-ups to update if I don't want to, or turn off everything and miss out on the critical security updates.
This isn't going to significantly increase the speed at which new features are developed, just the speed at which they're released. The overall risk of you encountering a new feature you don't like won't be higher.
Whoever spends some fixed amount of time evaluating each release will now spend 50% more time on that than before.
I’m one of those. If I’m only one in 200 (as in, the other 199 don't care) and Firefox has 200 million users, Firefox will waste millions of hours of users’ time for each additional release.
It's not necessarily the number of features but the process of preparing oneself for changes likely to be surprising or unpleasant, or to require workarounds or lengthy research into workarounds, etc.
Even if it's just a matter of reading the release notes, having to do that twice as often is a non-zero burden.
Of course there is Firefox ESR but I seem to remember Mozilla's website basically explicitly recommending against it with language along the lines of "This is only for people whose corporate policies require it, don't install it unless you have no other choice."
> process of preparing oneself for changes likely to be surprising, unpleasant, require workarounds
Do you actually do this? Do you think the percentage of users who do exceeds 1%? I mean, I do read the release notes for new web features but that’s because I know from the stats that almost all of our users will be updated shortly after release.
There are two questions you should ask: will this change the odds of a new feature I don't like being developed, and will an extra few weeks mean that they'll do anything other than complain? History suggests the answer to both is no, which means there's not much value discussing it.
Firefox ESR still receives updates on the same schedule as regular Firefox. The ESR updates are just limited to security fixes, so ESR will still download minor updates but you'll only see big changes once a year for major updates (like Firefox ESR 60.8 -> 68.0).
I've always wondered this: why release on a schedule at all? If people demand a certain new feature ("[this will] bring you new features more quickly"), it can just be brought out as it's ready, instead of having to wait an average of 3 (now 2) weeks before it can be released? And why does this have to increment the major version, since when is every single update backwards incompatible?
I think it's more a matter of discipline than anything else. It's easy to get lost in the weeds with a ton of things waiting to be merged, and over and over saying "just one more PR to merge and then I'll release", and then it's a year later and you still haven't released.
Getting features out to people faster means shorter feedback cycles.
Regarding versioning, most applications don't use semantic versioning; that's mainly for libraries. Not sure what "backwards incompatible" would mean for a browser, anyway.
Firefox probably has bad memories of when they "released when it's ready" during the FF3 -> FF4 days, when a couple of features held back everything. With fixed release cycles, they have more of a motivator to cut off things that aren't ready yet so that the things that are can go out.
Speaking as someone who was at Mozilla at that time, you can change "probably had" to "definitely has". In hindsight I'd go ahead and call it a disaster. The X weeks release cadence solves so many problems.
Remember back then when enterprises certified their entire install base against FF3, but then FF4 came out & they needed to redo all the enterprise certification bullshit.
With shorter release cycles, there's also less pressure to ship half-baked features. If you know the next release is just four weeks away, then slipping a feature one release is rarely a huge deal. But if the next release is one quarter or one year away, teams rush to ship now and fix bugs in production later. Overall product quality will take a hit at every release.
Another thing that I observed in our culture since we switched is that it is easier to coordinate cross-team efforts - if I know X will be done in Fx70, I can plan my work on top of it for 71.
Such logic is commonly performed without releases until you start depending on multiple features in multiple places.
Then it saves you.
The same "alignment" works also much better between engineering and non-engineering teams. Once we collect the focus areas and major features planned for release X, security, performance, accessibility, localization, platform integration, UI, UX, design, legal and marketing teams can prioritize their work to aid them.
Having no release cycle, or long release cycle, dilutes this effort and reduces visibility.
There is, in my opinion, somewhere, a threshold beyond which the release cycle burden outweighs the benefits, and/or where the cycle is too short for teams to usefully react, but I don't think we've hit it so far.
Totally. I think of this as the "next bus" effect. If you see your bus pulling up, do you run for it? If the next bus is in 5 minutes, no. If it's once a week, hell yes.
Good point, but factor in build/release, then deployment. We all know what some companies are like with their desktop estate rollouts - if updates are too frequent, you end up with greater disparity between installed releases, and the stock support response of "oh, that's not the latest, please try again".
Then there's feedback: if you roll out releases too frequently, you can be several releases in and still getting feedback about a bug in a release prior to the current one, albeit only a week or so old. So having organised releases allows better management and a better experience overall. It's a fine balance, but for those who need it, there will always be nightly builds of most things they want, with last-second updates, and self-builds for those so inclined.
I get that the whole artificial timeframe does seem somewhat odd, but the more you look at the overall usage, management and support factors and all, the more you start to appreciate such regular timeframes.
On the other extreme, I've worked in an environment in which a whole suite had to be retested and go through all the hoops because a full stop was missing from a comment, and that change caused the whole testing and management/release cycle to go through the motions again. That's the kind of change you would rather ignore and package up with other small things into a full release - which is what many do these days in most environments.
After all - would you want to go through a rollout of a new release that may contain a fix you want, or may equally just be adding a full stop to a comment in the code, yielding no binary-level changes, despite having gone through all the hoops and hurdles of a full release?
So from my experience, a 4-week release cycle is good: fixed, something that you can plan around. Also, kudos for not just picking a day a month and doing monthly updates like most; that tends to seem more arbitrary, unlike 4 weeks, which feels less arbitrary and more thought out. Though an element of personal experience does come into play.
> kudos for not just picking a day a month and doing monthly updates like most, that tends to seem more arbitrary unlike 4 weeks
Planning feature development and rollout is more difficult if you always have to refer to a calendar to find future release dates. I would prefer to be able to say a feature can land in Firefox Nightly in January, ship to Firefox Beta on February 1, and Release channel users on March 1.
Mozilla has a long wiki page of past and future release dates:
The problem there is February 1st isn't always a good release day, say if it's a Saturday. So you'd ship on the 2nd or 3rd sometimes... or sometimes that's a holiday, or there are valid reasons to avoid a period (like the first ~week of July or September). You could try to pick something like the "2nd Wednesday" of the month, but there are still occasions for mid-week holidays and other timing problems related to marketing or external events (other announcements, or world events that would clobber your marketing push). Inevitably you have to plan it anyway, so you might as well do that from the start.
Exactly my thought - why 4 weeks? Why not just put everyone on Nightly and be done with it? Then everyone will enjoy the firehose of agility and features that is Mozilla! You're always shipping working code, right? If it passes the harness, it's good, isn't it?
Firefox ships ~100 binaries for ~100 different targets: {mobile phones, desktops, tablets, ...} × {x86, arm, sparc, ppc, riscv, ...} × {Linux, Windows, macOS, FreeBSD, OpenBSD, NetBSD, ...} × {Arch Linux, Debian, Ubuntu, ...}.
At some point you need to say this is what we want to ship, and branch, and make sure that not only all tests pass, but that the installers work, run the benchmark suite that might not be run on every PR, make sure the updates work on all systems, appstores, etc. Accept new patches to fix bugs, etc.
I suppose they might have a release team that does nothing but releases, and if creating a full release takes a week (no idea how long it takes honestly), then maybe they could release each week.
But I don't think they can release a new version every day, because probably running their whole CI takes longer.
E.g. before shipping a version of Rust, the RC is used to build and run tests of all packages in the Rust package repository. This takes 4 full days. One could probably throw money at the problem and speed it up, but at least right now, releases just cannot technically happen faster, unless you are willing to compromise on quality.
You can notice by looking at the build time in the version string.
This makes it impossible to share a profile between two GNU/Linux distributions (or even between the official binary and a distribution's binary) without hacking around the version check, even if they are effectively the same version: one will complain that the profile has been used with a more recent version.
(But I certainly expect and want my distribution to rebuild Firefox - I trust Debian more than Mozilla to make sure no bit of proprietary code is in there, and to make tracking opt-in.)
Even if you’re right and they’re wrong, the snark was unwarranted IMO. You should either correct the GP or request more information (like a source), not ask rhetorically.
Only massive packages (firefox, chromium, KDE, Gnome, etc.). The entire promise of gentoo is the opposite: fine-tuning and local ~~heater~~ compilation.
My educated guess is that the localization data for every language is pretty massive, so per-region downloads can include the text for their region without dragging along a few extra megabytes of text/images for other regions. If you swap regions after installing it can probably download on-demand.
Historically reducing your download size by a couple megabytes has had measurable positive impact on conversion rates (from webpage visit -> install) so I expect this tradeoff made a lot of sense in the past even if it's less important now.
EDIT: One other thing worth noting is that there's zh-CN and zh-TW, and I can imagine the Chinese government being very irritated at the idea of their release having separate Taiwan loc text/images baked in, since according to them Taiwan doesn't exist. Major software like Windows has already had CN-specific releases for this (among other) reason(s).
zh-CN and zh-TW have nothing to do with that, and I'm virtually certain there are no "Taiwan images". In applications which provide zh-CN and zh-TW, including Firefox, the former means Simplified Chinese and the latter means Traditional Chinese. Theoretically, it could mean Taiwanese Chinese, but then there would also be zh-HK, zh-MO, zh-SG...
`zh-CN` and `zh-TW` are generally just old-school ways to say "simplified Chinese" and "Traditional Chinese" (two similar writing systems). The confusion comes in as the Chinese language family has multiple dialects and variants, and there are multiple writing systems.
The current practice is really a dimension reduction, where an N×2 matrix [(CN, TW, HK, SG, MY...) × (simplified, traditional)] was collapsed into a 2×1 vector (TW, CN).
The language labels should have been `zh-hans` and `zh-hant` if one means to differentiate the writing systems, and not the underlying linguistic variants.
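The collapse described above can be sketched as a simple lookup. This is a toy illustration of the common convention only (`normalize` is a hypothetical helper, not anything Firefox ships); real software should consult the CLDR likely-subtags data for the full mapping.

```python
# Sketch: collapse legacy region-based Chinese tags to BCP 47 script subtags.
# The mapping reflects the usual convention discussed above, nothing more.
LEGACY_TO_SCRIPT = {
    "zh-CN": "zh-Hans",  # Simplified script, conventionally labeled "China"
    "zh-SG": "zh-Hans",  # Singapore also writes Simplified
    "zh-TW": "zh-Hant",  # Traditional script, conventionally labeled "Taiwan"
    "zh-HK": "zh-Hant",  # Hong Kong and Macau write Traditional
    "zh-MO": "zh-Hant",
}

def normalize(tag: str) -> str:
    """Return the script-based tag for a legacy Chinese region tag,
    or the tag unchanged if it is not one of the legacy forms."""
    return LEGACY_TO_SCRIPT.get(tag, tag)

print(normalize("zh-TW"))  # zh-Hant
print(normalize("zh-CN"))  # zh-Hans
```

Note how the region information (TW vs. HK vs. MO) is simply thrown away: all three map to the same script, which is exactly the dimension reduction the comment above describes.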
According to the Beijing government, Taiwan absolutely exists; it's just not a sovereign state. Hong Kong was handed over to China more than twenty years ago and people in Hong Kong continue to use zh-HK – the Beijing government seems to be OK with that.
Right. Desktop software is also much more difficult to QA than server software because of the lack of control over the environment and the difficulty of rollback.
Also, for desktop software, I have found that small releases are a lot less risky.
You experience small issues, one by one, instead of a massive amount of bugs coming all of a sudden.
For example, the last time I updated a Linux distribution with 6-month releases, that's exactly what happened. A rolling-release distribution, by contrast, is just much more pleasant.
They must already be doing a full build, test, and benchmark for each platform for each PR to their codebase in CI, presumably in parallel to make it run reasonably fast. It doesn’t seem like much of an extra step to release those binaries they’re already building hundreds of times a day.
If even Microsoft can't catch these, why should Mozilla be able to? I am admittedly not in the CI business for frontends, but I'd imagine the combinatorics vastly outnumber what any organization can cover.
To begin with, it is probably released with a subset of the CI matrix, because I imagine that the full CI matrix takes more than a day.
Second, it is nearly impossible* to cover all possible environments+addons+websites and test them correctly. This isn't a normal desktop app, nor a webapp that always runs relatively sandboxed in a more or less known environment. Browsers are platforms, as OSes are. Lots of people doing wacky stuff on them. Very hard to realistically automate all that stuff.
*impossible with a limited constraint of time and/or money.
We do exactly this for people who want it. A reasonably stable build (including full testing and benchmarking, which takes multiple hundreds of hours, parallelized of course) is available twice a day (0:00 and 12:00 UTC). It self-updates (either a differential or a full download, depending on what landed over the past day).
I've been using only this for the last 10 years or so (on Windows, Linux, Mac and Android), and I can count the number of times things really broke on one hand.
Release notes are available at https://www.mozilla.org/en-US/firefox/71.0a1/releasenotes/, but they are not always updated for each version; sometimes they are batched (it's quite some work due to the number of commits that are pushed per day).
All this is really experimental and probably not for everybody, but it helps us deliver something good in the end. Filing bugs and giving us feedback is of course particularly appreciated, as always!
Are nightlies tested on CI for all platforms that Firefox supports? I only see Tier-1 in that matrix (Linux, Windows, Mac). No iOS, Android, *BSDs, etc.
They build downloadable binaries every 12 hours, called Firefox nightly, which is to say the bottleneck for full releases isn't building or automated testing. Mind you, the nightlies are experimental, and not for general use.
Doing that for a binary release that ships across dozens of platforms, configurations, locales, etc. is incredibly time-consuming. Uptake on binaries is also usually on the order of days to weeks before most of your user base is up to date. It's easier for, say, a website or service that can deploy to millions of users within minutes.
Not part of the talk: if you have more than 3 people involved you probably need a release schedule in terms of either scope or time. I'd personally advise you to have one even if you only have 1 person, just for a mental target if nothing else.
The talk goes into studies of projects which did scope-based releases and got horrible results in practice. All the ones in the talk switched to time-based releases and were much happier. In a software project you can fix either scope or time, but not both – and if you want to ship in a timely manner, you probably want to fix one of them rather than let both float.
I feel like it might help with motivation. It's easy to push things "to tomorrow" if you don't have a hard cap on how many tomorrows are available before the feature is supposed to go out.
> If people demand a certain new feature ("[this will] bring you new features more quickly"), it can just be brought out as it's ready, instead of having to wait an average of 3 (now 2) weeks before it can be released?
It's kind of like forming a habit. Having a set schedule could in theory delay quick features, but it will also keep the motivation going for time-consuming features.
> And why does this have to increment the major version, since when is every single update backwards incompatible?
I actually really like this personally. Encoding compatibility in a version number is, IMO, useless— it's sometimes just hard to know if your changes will break something or not, so there's a wealth of minor versions out there that break things anyway.
Using a single number makes everything simpler. Detecting breaking changes should be done with systems (tests, static analysis, etc.), not by trusting a number some developer came up with.
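For context, the semver convention being debated here boils down to a rule like the following. This is a toy sketch of what the version number *promises* (`upgrade_risk` is a hypothetical helper), not an endorsement – as noted above, the promise is only as good as the judgment of whoever bumped the number.

```python
def upgrade_risk(old: str, new: str) -> str:
    """Classify an upgrade under semantic versioning:
    major bump -> possibly breaking, minor bump -> new features,
    patch bump -> bug fixes only. Assumes plain "X.Y.Z" strings."""
    o = tuple(int(part) for part in old.split("."))
    n = tuple(int(part) for part in new.split("."))
    if n[0] != o[0]:
        return "major: possibly breaking"
    if n[1] != o[1]:
        return "minor: new features, should be compatible"
    return "patch: bug fixes only"

print(upgrade_risk("76.0.0", "77.0.0"))  # major: possibly breaking
print(upgrade_risk("76.0.0", "76.0.1"))  # patch: bug fixes only
```

The whole scheme is three integer comparisons; the disagreement in this thread is purely about whether those comparisons carry trustworthy information.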
For quite a few products, releasing takes time. QA, security auditing, localization, generating the builds, running exhaustive in-depth unit tests, flushing caches, writing release notes, publishing on different websites... it can take a while!
When the cost of a release goes above a certain amount, due to the sheer number of features or the thoroughness of the release process, it starts to make sense to amortize release tasks across a batch of features.
Bundling those features on a time basis is quite often a local optimum between an acceptable delay for the product team and an acceptable cost for ops/release/qa/...
They're clearly moving gradually in this direction, but I think it's an open question how far they can get.
For server-side software, it's easy enough to do multiple releases per day, because you have full control over the environment, and all the work is internal. But for desktop/mobile software like this, having multiple releases per day won't be efficient in terms of bandwidth, load, or user experience.
And each browser version has other external costs, like documentation and support. Right now it can be hard to solve a problem like, "How do I do X with Firefox" because the answer varies by version. The more versions you have, the harder that is.
Or can you imagine trying to triage and verify bug reports if there are, say, 3 new Firefox releases per day, and people could be on any one of dozens of versions because they don't quit and restart the browser very often? We can imagine solutions to these problems, of course. But I think getting a release cadence below weekly for desktop apps will be a very long road.
One issue with releasing every day or every hour is that that would mean users downloading massive numbers of updates. Sure, you ship deltas, but there's still a cost to each update.
I don’t really like these quick release cycles. When things get released every year or half year I have the time to read about the changes but if I work with several packages that release all the time I don’t even have the time to read about the changes.
I also think that quality tends to suffer. Over the last two years Windows 10 has shipped several buggy updates that caused our in-house apps to stop working.
Firefox ESR exists for exactly this reason. It gets periodic security fixes but not much else and you can stay on the same major version for a year or so. https://www.mozilla.org/firefox/enterprise/
Quality can be worse in a long-time cycle project:
- Engineers are motivated to slam their feature in, because if they miss the train the next one's not for 12 months.
- You get one moment per year to connect with your customers & understand how well/not well your changes worked. This means either riskier things happen or that innovation slows to a crawl.
My 2 cents, speaking from some experience working on long time cycle projects and shorter cycle projects.
Firefox has been on a 6-week cycle for a few years now; clearly they found that short releases suited them, and that the cycle was if anything too long. Different strokes and all that.
It's not like features go straight from master -> release in 4 weeks. Changes have to go through developer-edition and beta channel first before landing in the release channel.
Why cycles at all? Why not look at what features have been integrated, whether they make up a set that you want to release, and then release?
Neither long cycles nor short cycles make any sense. Some features take a long time to develop, some take a short time to develop. Sometimes features that take a long time to develop aren't user-facing enough to be worth releasing for, and sometimes a quick fix has a huge impact that's worth a release, like a patch for a vulnerability that's actively being exploited by a spreading malware. Features simply don't line up with a single length of time. The problem isn't long or short cycles, it's cycles.
Is "releasing when it's ready" basically what was done in the past for e.g. CD-distributed software?
I imagine that could work well in some cases, but it also allows corporate bureaucracy and/or marketing teams to determine when things get released at larger scales and that might not be so ideal.
Because regular, predictable releases mean that developers know they can always "catch the next train", and users know they can plan around predictable upgrade schedules.
> Because regular, predictable releases mean that developers know they can always "catch the next train"
This is an argument for frequent releases, not regular, predictable releases.
> users know they can plan around predictable upgrade schedules.
I'm not sure this is actually how users plan upgrades.
The majority of individuals probably never turn off the auto-update flag. Planning doesn't enter the equation.
For organizations, my guess is that most organizations will try to build their upgrade process around security, but the reality will rarely be so clean. When I worked in IT we'd get computers into our shop that hadn't been updated. Period. We'd upgrade our provisioning images when there was a notable security patch, and besides that, we just would run updates on every machine every week at 2am Sunday night: that way it didn't interfere with users, but if something went wrong, we were on it with the full team first thing Monday morning. But if machines were turned off or whatever, they wouldn't run the updates. At no point did we ever even check the release schedule of a piece of software: the updates happened on our time, and theirs was irrelevant.
I didn't work in IT for very long, though, so someone with more IT experience should correct me if I'm wrong.
Generally speaking, you can release based on the calendar or based on whenever you think the feature set warrants it. You are advocating the latter which works well on low traffic projects. The former is a better idea on high traffic projects where there's always something worth shipping whether it's a new translation, a bug fix, or new feature.
It depends on the project, but in larger projects the calendar approach means politics takes a back seat as no one can hold back the release if their feature hasn't been merged yet due to blocking issues. And it helps keep the change-set small, and hence lower risk, if you release more often.
Also, Firefox already has a nightly release stream. I use the Aurora stream which releases a few times a week and have almost never had an issue with this frequency. I don't think a monthly release cycle is going to be an issue.
Notice that almost all the positives of quick release cycles are about how they make things easier for developers? I think that's telling of where tech culture is these days.
IME frequent updates terrify non-technical users: any given update might change the UI, remove a feature they use, or add some other complication (like simply not working) at an inopportune moment. Big fat updates give them a chance to prepare for these changes at a time of their choosing.
Fortunately I don't do family tech support anymore, but that's exactly the sort of thing that would result in a panicked phone call and a user having to "learn it all over again". And that's for a purely cosmetic change.
"We’re adjusting our cadence to increase our agility, and bring you new features more quickly. "
Does having a faster release cycle really mean getting new features more quickly? The original 6-8 weeks is already pretty damn fast. So I see this as marketing aimed at general news sites rather than anything technical.
Firefox (and Google Chrome) already has Alpha and Beta channels with a fairly large audience testing them in the real world. And features require time to be thought through, designed, baked, tested in the real world and refined; having a few more releases in between those steps doesn't make the features come out any quicker.
Under a 6-8 week release cycle, it takes 12 to 16 weeks for code to go from Nightly to Release assuming it rides the train and is not uplifted to Beta. That is not fast.
Furthermore, the audiences for each of these channels are different, with Nightly skewing much more toward tech-centric early adopters. While you do some testing in the prerelease channels, your main experimentation is going to have to wait until the code hits release.
Yeah, you are missing something. I'll help you though and explain. =)
So, let's say a feature takes 7 weeks to properly build and test. You start on week 1.
Now, after 6 weeks, a release goes out. A week later, your feature is done. You now have to wait 5 weeks before your release goes out. So, you merge your feature and move on to your next feature. 5 weeks later, your feature is released.
So, to release a feature that takes 7 weeks, you had to wait 12 weeks.
Now, with a 4 week release schedule, you only have to wait 1 week.
This is a bit simple, but it really works this way. Now, here is the other end of the spectrum.
Let's say the release schedule is once every 12 weeks. (Under the idea that quicker is worse and slower is better). Now, let's say at week 6, you start your 7 week feature. It will get done on week 13, and have to wait 11 weeks to get released. That's a long time! So what happens? Well, what tends to happen is people push to get it done in 6 weeks rather than 7. That means skipping steps, working longer hours, and cutting corners. Why? Because they don't want to finish with something and have to wait 11 weeks for it to get released. They want to get it out there.
However, if you know that the longest you have to wait is 4 weeks, that's a lot more palatable.
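The release-train arithmetic above can be sketched in a few lines. This is an idealized toy model (`release_wait` is a hypothetical helper; real uplifts and channel stabilization are ignored), using the week numbers from the examples:

```python
import math

def release_wait(start_week: int, feature_weeks: int, cadence: int) -> int:
    """Weeks from starting a feature until it ships, assuming releases
    go out at weeks cadence, 2*cadence, 3*cadence, ... and a finished
    feature rides the next train."""
    done = start_week + feature_weeks            # week the feature is finished
    next_train = math.ceil(done / cadence) * cadence  # first release at/after 'done'
    return next_train - start_week

# A 7-week feature started at week 0:
print(release_wait(0, 7, 6))    # 12 weeks total (5 of them waiting)
print(release_wait(0, 7, 4))    # 8 weeks total (1 of them waiting)
# The 12-week-cadence example: 7-week feature started at week 6:
print(release_wait(6, 7, 12))   # 18 weeks total (11 of them waiting)
```

The shorter the cadence, the smaller the worst-case dead time between "done" and "shipped" – which is exactly the pressure-relief argument made above.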
> And features requires time to think, design, bake, tested in real world and refine,
Having more frequent releases doesn't change that.
> having a few more releases in between those process doesn't make the features come out any quicker.
tl;dr: They don't make finishing the feature faster. But they do allow finished features to get released faster. =)
Wrong decision. Nobody wants to update their browser this frequently, and the release overhead is a constant factor. So you're basically having a lot less productive time where you actually produce value. But hey, not my call I guess.
Absolutely nobody? I update all the packages on my Linux machines (personal ones, not servers) at least daily. I'm happy to get any changes as soon as they're released. Obviously I'm likely in the minority here, but is it really that inconceivable that a decent number of people might be fine with updating their browser once a month? It's a pretty quick process, and for the average user who isn't using many extensions (if any), things don't really break when it updates.
>Nobody wants to update their browser this frequently
hi, I run nightly and get prompts that my browser has updated daily. I simply ignore them, and it'll be patched by the time I start up firefox the next time, because it updates in the background. And so does regular firefox.
You don't see Chrome updating, and yet they're pushing out more releases than firefox ever did.
I almost never restart Firefox, and having a pop-up nagging me to update sucks a lot. Sometimes I have to look at it for many weeks straight before Firefox eventually eats all the RAM and I have to restart it.
The thing that scares me about this change is that the only stated reason for it is "we've had many requests to take features to market sooner." Whereas the rest of the blog entry just tries to assuage our fears and placate everyone that things won't explode.
I don't know if 4 weeks is or isn't the right answer (after all some SaaS companies update dozens of times a day), but I know that I tend to leave my browser windows open until it crashes or the computer reboots. While I don't regularly use Firefox right now, I can imagine that I could skip entire versions with this behavior.
On firefox I get a thing that says "we need to update really quick, sorry!" You click "ok" and the browser restarts and reloads everything. Seems to be a pretty good solution to the problem you describe.
Oh no, not this. Every new release means hours spent rebasing patches, investigating why the new version won't compile, and then creating more patches. When Firefox finally builds, I then have to find out what they have removed/changed that worked fine before, and what new crap should be disabled in about:config. I need LESS of this, not more..
This is great news. I recently started using Firefox (out of principle mainly). I find it pretty unstable, and whatever needs to be done to grease the bug-fix chutes - so be it!