The double standard when it comes to Python is mindblowing. No other language could have gotten away with such ridiculous fragmentation and perf regression for such a long period of time.
Perl 6 is a completely new language, but that doesn't mean it's incompatible. Inline::Perl5 provides a rather extensive compatibility layer with Perl 5: https://github.com/niner/Inline-Perl5
Inline::Perl5 automatically maps Perl 5's syntax, object system semantics, exceptions, etc. to Perl 6's, so that Perl 6 users in principle don't have to know about Perl 5, just the logical API of whatever module/library/function they use.
They didn't really "get away with it". But as far as Oracle, for example, doing something similar -- yes, open source languages get more of a pass.
Being an open community effort gives you a free pass to make mistakes because you're methodologically sincere. Decisions are made by, and on behalf of, those who use the systems.
Having proprietary interests means you're methodologically insincere: you sell decisions as being for the developer, but they are mostly for the company.
The legacy support is a benefit for people who are still using it.
The lesson from both Perl and Python is clear: you cannot remove language features - if you do you're creating a new language which has to start from zero in adoption.
Perl 6 removes very few language features. About the only notable ones I can think of are fork and format. Since Perl 6 has built-in multithreading, fork is not needed. And hardly anyone uses format anymore (it is great for printing directly to a line printer).
What Perl 6 did was change the syntax of various Perl 5 features while adding a lot of new ones.
Not to mention the loss of tuple unpacking in function signatures. Even if I felt morally wrong about it, for the last few years I've been writing Python 2 with a few `from __future__`s at the top, since it was the fastest and most featureful option and had all the libraries.
I'm not sure that tuple unpacking in function signatures is any great loss. It's a feature I hardly ever saw used and if you do need to unpack an argument then you can do so in the first line of the function's code.
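For reference, the feature in question (removed by PEP 3113) and the first-line workaround, with a made-up function for illustration:

```python
# Python 2 allowed tuple unpacking right in the signature:
#     def midpoint(p1, (x2, y2)): ...
# Python 3 (PEP 3113) removed it; unpack on the first line instead:
def midpoint(p1, p2):
    x2, y2 = p2
    return ((p1[0] + x2) / 2.0, (p1[1] + y2) / 2.0)

print(midpoint((0, 0), (2, 4)))  # (1.0, 2.0)
```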
I saw it used a lot, and it caused hours of rewrites to get libraries Python 3 ready. It's one of those things where I wonder why they didn't just deprecate it over another few iterations, allowing scripts to be changed over time.
Any code with a lot of assumptions about text encoding or integer division is going to be terribly broken after 2to3. It very much does require manual intervention unless you're doing very simple scripts.
Sure but the 2to3 tool allows you to run the tuple_params fixer on its own. Text encoding and integer division are separate issues and have nothing to do with unpacking tuples in function arguments.
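If memory serves, running just that one fixer looks like this (`mymodule.py` is a stand-in; `-f` selects a single fixer and `-w` writes the changes in place):

```
$ 2to3 -f tuple_params -w mymodule.py
```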
My point in this conversation has been that this feature doesn't require manual rewrites. And I'd add that in any case the deprecation period for python 2 will have been 12 years once Python 2 reaches EOL. That's plenty of time to update a code base.
Double standards by whom?? Every other article or comment I’ve read since basically py3+1y has been incredibly negative about it. The only positive news about Python, recently, seems from people who were not around since the py2 days. Data science is where it’s at now, but it used to be massive and ubiquitous. Golang came at a time where public perception of Python was down the drain, and ate a big chunk of its lunch.
I've been around since ~1.5 and have no idea what you are talking about. I suspect one's perception is very much coloured by where, and with whom, one hangs out.
Agreed that there was never a shortage of negative articles about the py2/py3 transition. Fairly certain Golang's success was down to its own advantages more than Python's perception; Python's use has grown through all this time.
Pretty sure that's not true. Golang might have taken some from Python web frameworks or command line tools, but that's about it. Python's popularity in scientific computing has only grown over time, and that encompasses a lot of fields and users, not just data science.
My interpretation of the usage share is that Golang ate a bunch of lunch that would've otherwise gone to Python. That is, people moving from other languages (Perl, PHP) to something else for various reasons.
Absent Go, I suspect many current Gophers would've landed on Python. So in a sense, Python lost people to Go. But I've never seen anything to indicate that there was a large migration from Python to Go, and certainly not because of 2 to 3 or public perception--whatever that means in the context of a programming language.
Python 2 is still the mainstay for Data Scientists. I've seen more resistance to moving to 3 there than almost anywhere. Which is pretty funny because it's the easiest code to port assuming your data science libs support 3 (and all the big ones do.)
I think it's more accurate to say that Go ended up with some of Python's potential mindshare, not that it stole a significant amount of existing mindshare.
Wasn't there a big brouhaha over the change from VB6 to VB.Net? The big difference there is that if it's ordained from your corporate overlords, you have little choice, whereas you feel like you've got a bigger right to complain when it's "just" coming from the BDFL.
Calling it a "new language" flatters VB.NET. In reality, it was a "marketing compatibility shim" on top of C# to help VB developers acclimatise to the CLR.
What do you mean by double standard? Everyone in the community, including the core developers, acknowledges that the 2 to 3 story was full of mistakes. They have promised not to make the same mistakes again. And they took some actions to remedy the problems, which is why Python 3 is finally succeeding.
The double standard is that people still stick to Python despite those problems. If we're being honest and all conspiracy aside, imagine the same story for Ruby (a very similar language in terms of capability, features and performance), everyone would be laughing at them and the language would be pulling a Perl right now.
That massive chip on your shoulder is weighing you down.
If you really think something like that is happening (big if - I’ve gone through so many py3 flames...), the only answer is: because people love the language SO DAMN MUCH that they will overlook its warts. And the next logical step would be to find out how an independent research project, staffed mostly by volunteers and without a marketing budget of any sort (and not running in every browser nor shipping preinstalled on the most popular operating system), ended up with this sort of mindshare.
Ruby's break also came in a minor version change (1.8 -> 1.9), but people moved on because 1.8 was declared EOL; this is what Python was missing. Python 3 is 10 years old and 2.7 is still supported (bugfixes only, but still).
If you look at the timing, Python 3 picked up for new projects in 2015, when the PSF declared that no new features would be backported to Python 2 from 3.
I believe the industry will wait until 2020 to start migrating existing code, after Python 2.7 is declared EOL.
Those three scripting languages actually make an interesting comparison.
Perl 5 -> 6 broke all your code in exchange for a whole new language. Some people moved to Perl 6, some to other languages, and a lot stayed with Perl (5), which still runs programs written in Perl 1.
Ruby 1.8 -> 1.9 broke most of your code in exchange for minor cleanups. But Ruby was Rails, so the culture accepted constant rewriting and breakage, and people went along.
Python 2 -> 3 also broke most of your code in exchange for minor cleanups. Users expected some stability, but the language's culture demanded conformity. The result has been a decade-long war in which Python fans try to shame Python users into moving to version 3.
> But Ruby was Rails, so the culture accepted constant rewriting and breakage, and people went along.
The main reason people switched probably wasn't rails making people expect breakage, the main reason was that (a) it was relatively easy to rewrite your code to work on both 1.8 and 1.9 (as long as your dependencies had already done it), and (b) 1.9 was much faster.
Did not watch the presentation, but IMO the primary mistake was not declaring EOL for 2.x sooner. Other languages had similarly breaking changes, and they simply gave a short window before the legacy version was declared EOL and developers ported their applications to the new version.
In Python it took 10 years, and even now you can't get companies to migrate, because why make developers do the work when the application can still be used? Many places will probably start the migration in 2020, which shows that giving a 10-year head start was a mistake and all it did was create FUD and division in the community.
Python 3 immediately picked up for new projects after the 2015 announcement that 2.7 would get no new features, only bugfixes. Industry will wait until the last moment, when all support for 2.7 ceases in 2020.
One mistake was providing 2to3 as a migration path. This required projects to move completely from 2 to 3, for libraries it meant that they had to either stop supporting 2 (which were the majority of their users) or forking and maintaining both versions. The remedy was to make it possible for libraries to be both 2.7 and 3.x compatible, using tools such as six.py and making some changes to 3.x to make it more compatible with 2 (e.g. adding u'' strings).
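A minimal sketch of what that straddling style looks like with six (function and strings invented for illustration):

```python
import six

def greet(name):
    # six.text_type is unicode on Python 2 and str on Python 3,
    # so one isinstance check covers both interpreters.
    if not isinstance(name, six.text_type):
        name = name.decode("utf-8")
    return u"hello " + name

print(greet(b"world"))  # works unchanged on 2.7 and 3.x
```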
The problem with six is that once you decide to use it your entire code base must be written with the lowest common denominator in mind, in this case py2.7. You can't use six.py at the interface boundaries and nice python3 features inside your implementation. Six is only useful for old 2.7 libraries that want to quickly be 3.x compatible and still maintain their 2.7 compatibility, not if you have an old 2.7 application and want to incrementally add 3.x features.
The proper way to make a major breaking change like Python did is to provide a bridge between the two versions, so you could import 2.7 modules from 3.x applications, and even the other way around. As of today there is still no such thing.
I think it's the opposite. When other languages make incompatible changes, they have serious problems (e.g. Perl 6).
Previous versions of python were back-compatible. Java versions stress back-compatibility. I don't know if HN can find any counter-examples, of a language that thrived through an incompatible, code-breaking upgrade.
Python has done everything possible to avoid this fate, by maintaining 2.7. And I'm not even convinced Python has gotten away with it yet (e.g. kalite, offline Khan Academy, needs 2.7). RHEL may end up reversing this. Give it another ten years!
And the Enterprise is the most back-compatible thing you've ever seen. This change is going to break lots of code, and they will reverse it or lose customers.
> When other languages make incompatible changes, they have serious problems (e.g perl6).
Counter-example: The ruby 1.8 -> 1.9 -> 2.0 upgrade path. 1.9 introduced the whole encoding system, which was a major update at the time. It did break a lot of libraries that handled strings. Still, the change went over fairly quickly, partly because major players in the ruby world (rails) adopted the change. The update also brought clear benefits and it was clearly marked that 1.8 was going to go out of support. There are still some stragglers (centos 6, I'm looking in your direction), but it's clearly accepted that ruby 2.x is the standard that you should be writing code for.
You reminded me of the java version incompatibility nightmares that once littered the enterprise landscape. If they do that now, it's because their history includes perhaps the worst set of incompatibility jumps of any popular language.
> I don't know if HN can find any counter-examples, of a language that thrived through an incompatible, code-breaking upgrade.
Ruby qualifies, I'd say. From 1.8 -> 1.9 there were syntax changes, changes to how strings worked and so on. Code written for 1.8 would not in general work on 1.9 and the other way around. While it was possible (sometimes with some extra if statements) to write code that worked on both versions I would definitely call it an incompatible, code-breaking upgrade.
The fact that it was possible to write code that worked with both was critical, though. It allowed you to upgrade your code incrementally to support 1.9 while doing regression testing on 1.8, and then come up with a deprecation schedule for 1.8.
Uhhhh... sort of? I mean, I could go through all of the needless "why the hell did they make this part incompatible also" stuff, like removing u'' support (that was added back later), or all of the ludicrous breakage in the underlying APIs (like filesystem calls that made assumptions about character encoding that make no sense for the underlying Unix model and literally caused files to just disappear from iteration)... a lot of this stuff was fixed in later versions of Python 3, but that was years later, making it nearly impossible to deal with the existing land mines of people who had installed Python 3.0-3.3 (or even 3.4).
But what shocks me about your claim is the really basic stuff... I mean, as one obvious example, they changed the exception syntax. So, "uhhhh.... sure?", it was possible to make a program that ran on both 2 and 3, but it required using "try" with no variable assignment and then digging into Python's exception reflection support to recover the variable. And you had to do this every single time. (And of course a lot of the time the underlying core library would now throw different exceptions, making the situation all the more ludicrous.)
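For anyone who never had to write it, the pattern looked roughly like this (strictly speaking it's needed when one source file must parse on Python older than 2.6, where "except X as e" is a syntax error, as well as on 3.x, where "except X, e" is):

```python
import sys

try:
    int("not a number")  # raises ValueError
except ValueError:
    # Neither "except ValueError, e:" (py3 syntax error) nor
    # "except ValueError as e:" (pre-2.6 syntax error) parses
    # everywhere, so recover the exception object by hand:
    e = sys.exc_info()[1]
    print(e)
```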
I had code that straddled the 1.8/1.9 boundary of Ruby, and even though I wasn't an expert at Ruby it was trivial to have code that worked for both (and I mean 100% correctly: Unicode and everything; remember that neither Ruby 1.9 nor Python 3 added Unicode support, they only made Unicode support easier to obtain. For Python 2 there was already comprehensive, if non-default, support, and for Ruby 1.8 you just needed to be careful about where you did conversions).
> it was possible to make a program that ran on both 2 and 3, but it required using "try" with no variable assignment and then digging into Python's exception reflection support to recover the variable.
Python 3.0 came out in 2008 and 3.3 came out in 2012. Are you really saying that nobody used Python3 for the first 4 years of its existence? I was certainly using 3.1 and 3.2 back then.
Blender was using py3k for quite a while before 3.3. I was doing Blender dev work back then and can recall the almost immediate upgrades as soon as the newest Python release came out -- kind of painful really, since I preferred to use the Fedora-installed version instead of building from scratch, so I would often stall my dev work until Fedora caught up to Blender.
They jumped on so early the python devs were saying "WTF are you people thinking?!?"
Given Blender is mostly C bindings and a huge code base, it's kinda impressive, and a good example that it's not an impossible task to migrate even a very hard project, let alone the most common ones.
It wasn't a migration as much as a bottom-up rewrite of the entire bpy and big chunks of blender itself. Most of the bindings are generated by makesrna at compile (compile compile?) time which is fairly simple once you go deep diving into the sources.
It actually takes a fair bit of shenanigans to get hand-coded bindings into blender, my last(?) patch was bindings for the kdtree and I had to do some convincing to get that accepted. I'd guess that 99% of bpy is generated bindings.
If they stuck with python 2 when they started the initial overhaul they'd probably still be using it since artists are very vocal when their scripts break.
Django never had support for Python < 3.3, and numpy and scipy only supported Python 3.2 and up. The story is similar for most other packages. So if you were using Python with any of its major libraries, then Python 3 was pretty much useless for you for the first 3-4 years.
It's notable that 1.9 came with a significant performance boost, so there was a good practical reason to want to upgrade. People were motivated by the prospect of their code running twice as fast - sure, the language was improved a bit too, but it's Ruby, it's already pretty nice, so that was hardly going to light a fire under anyone.
When Python 3 turned up and it was on average slower, my first thought was "well, good luck getting people to adopt that".
> I don't know if HN can find any counter-examples, of a language that thrived through an incompatible, code-breaking upgrade.
Swift breaks with at least every major version, though no longer as often or on as large a scale with point releases as it did with 1.x and 2.x. It has been like that from the beginning, and everybody who writes it has accepted it.
> I don't know if HN can find any counter-examples, of a language that thrived through an incompatible, code-breaking upgrade.
Of course. Swift 3 is one example. Ruby 1.9, as others have pointed out, is another.
PHP is yet another, and perhaps more relevant because it is close to contemporaneous. It had its "Python 3 moment" with the migration to PHP 5. Lots of BC breakages.[0] That transition took about 3-4 years for most.[1] Most users are now on the current major version, 7, and the remainder on 5 are mostly using its latest release (EOL is EOY). No one uses 4.
Swift 3 is compiled. So you can provide the binaries and be ok.
Ruby 1.9 is a great example of why you should not be too nice: they told everybody "you have a few months, deal with it". The community moved. Python said "poor things, we understand, take these tools and years to do the thing", and the community cried, and did nothing.
PHP literally failed. They canceled V6 and jumped to V7.
The funniest part?
None of those languages are even close to Python's popularity.
Swift is not even used for most of Apple's codebase. Python supported even the Atari.
Ruby and PHP have almost no use case outside of the web. Python is used by OSes, on the web, in GIS, by data analysts, for CGI, in AI, for sysadmin, to make GUIs, video games...
And Python is much, MUCH older. 1990. 4/5 years older than Ruby/PHP. It has way more technical debt to pay. Swift? 2014.
Yeah the migration was badly handled from some aspects. But honestly, given the challenge and track record of the competition, it's not too shabby.
Ruby 1.9 worked mostly the same as 1.8 while providing a 2-3x speed improvement and exciting new features.
Python 3 broke even the basic Python 2 hello world, everything was notably slower, and there was nothing new and exciting feature-wise apart from unified string support.
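(The hello-world break being, of course, print becoming a function:)

```python
# Python 2: `print "Hello, world"` is a statement.
# Python 3: that exact line is a SyntaxError; print is now a function:
print("Hello, world")   # fine on both 2 and 3
```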
Swift is Apple’s baby and has a captive audience. Since Apple cannot even update their example code, I don’t think it proves any points about keeping up with a moving target.
Maybe I'm wrong, but I have the impression that Ruby is now an expert's language, and much less accessible to beginners than it used to be. And, while it remains useful and used, it's not growing...
When trying to learn it, I encountered incompatible examples and tutorials, complex and changing ways to update it... and then my toy code stopped working. I eventually found the cause of the code-breaking change, fixed it... thought a bit, and decided against ruby. That's just me though.
Python had deep problems that could simply not be fixed without BC breaks.
The decision was made, and in hindsight it was the correct decision by Guido and other core devs.
Look at PHP, and look what a total cluster of madness the language has become.
I think Python 3 was not brave enough, to be honest, to future-proof Python:
* an optional type system without hacks was not introduced (types in comments, really?)
* the GIL was not removed or at least a solid parallelism story was not included, one that would allow Python to use all cores on a system while sharing, if needed, memory
* no true performance improving changes were made; by this I mean stuff that improve performance by an order of magnitude; Python is still way too dynamic and this basically sunk Google's Unladen Swallow and Dropbox's Pyston, at least in terms of worldwide adoption
I say this because I'm a bit worried that long term Python has locked itself into a corner where it will keep getting pushed into as the other languages develop a better developer UX. Python was a bit lucky to catch the ML train but who knows how long this will last. And Go, Kotlin, Typescript, Swift, Julia, Rust even are improving their ergonomics and edging towards Python's core competency.
Ugh, that's because PEP 484 (which introduced 'proper' type hints) only covered parameters and return types, and was implemented in Python 3.5. Type hinting bare variables was introduced with PEP 526 and implemented in Python 3.6.
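Concretely, as I read the two PEPs:

```python
# PEP 484 (Python 3.5): hints for parameters and return values only
def scale(value: float, factor: float = 2.0) -> float:
    return value * factor

threshold = 0.5  # type: float   (the pre-3.6 comment workaround)

# PEP 526 (Python 3.6): proper variable annotations
threshold: float = 0.5
```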
While I really like the concept of PEPs, and I think keeping the original PEPs around (as they were written) is the right thing to do, I do sometimes wish that Python did a better job of incorporating changes from PEPs into the core documentation once implemented.
I'd argue that the fact that it was explicitly a design goal to be a teaching language meant that it saw wide-scale adoption by universities, which made it more likely that someone writing an adaptor library to old linear algebra packages would write it in python.
Perl6 was brave enough. It became a totally different language and lost much of its user base who moved on to Python, Ruby and to a lesser extent Node.
From what I saw, Python3 doesn't have type hints but it at least warns when you try to do stuff which involves magic, such as trying to increment a float.
Perl6 doesn't warn in those cases. If you use type inference it just happily goes ahead and uses the equivalent of Perl 5's BigNum, which makes the code run 50x slower, instead of upgrading to BigNum only when int overflows like Ruby does. If you use ugly C-style loops instead of the nicer native for loops, your loop will run 2x faster. Too much magic, or the wrong sort of magic, can be really annoying sometimes.
> If you use ugly C-style loops instead of nicer native for loops, your loop will run 2x faster.
This is what I currently consider a workaround. At the moment if you have a constant endpoint and it fits in a native int, such as `for ^10`, it is already highly optimized and actually runs faster than the equivalent in Perl 5. That is not done yet where the endpoint is a native int variable. :-(
> the GIL was not removed or at least a solid parallelism story was not included
Not for lack of trying. People have been trying to do this since at least python 1.4. They just haven't found a way of doing it without negatively affecting single threaded performance, which Guido considers an unacceptable trade-off. http://pyparallel.org is probably the most interesting latest experiment in this field, but it never got cross platform support and seems to have stalled out.
They did break all C extensions in py3k -- not so bad you can't work around it with some preprocessor magic but break it they did.
And they continued to break them within the py3k lifetime, mostly unicode related stuff from what I can remember. Probably still break stuff but I haven't had time for my toy python projects in a while so haven't been messing with the C-API lately.
The C api isn't the blocker for removing the GIL. The blocker is that Python is a dynamic language and there is no good way to know if another thread has just changed the underlying function on an object or not without having fine grained locking.
Fine grained locking works, but is expensive for non-multithreaded applications and slows down Python. It would speed up multithreaded applications though.
"I say this because I'm a bit worried that long term Python has locked itself into a corner where it will keep getting pushed into as the other languages develop a better developer UX."
You can stop worrying. That is absolutely what will happen.
But it is a good thing. Languages should be something, not merely an accumulation of every fad and trend that appeared over their lifetime. That means eventually they will mature, and eventually hit a peak, and then fade. But during that process, at least you'll have a nice, mature language, instead of one always chasing the latest pretty-shiny, immature, confusing, crammed with too many features, with libraries constantly jerking around trying to keep up and always just a bit broken. Indeed, I kind of think Python has already done a bit too much of the latter, and were I in charge of Python I'd give some serious attention to the idea of simply freezing the core language.
Python is what it is. Let it be what it is, because what it is is pretty good. It's by far my favorite language of the genre it is in. It's got way too much baggage to compete in the next arena, so let it be the master of the one it is in instead.
> [Languages] will mature, and eventually hit a peak, and then fade.
"Maintenance mode" should be a badge of honor: you have created something that is a stable foundation for new things. For example, how many programming languages are implemented in anything other than C and/or themselves? Not a lot, and that's a huge compliment to C. Despite its faults, you can probably count on your C program working after you're dead.
> I think Python 3 was not brave enough, to be honest
Agree 100%. I would have added:
* fix the C API and pass the interpreter instance as the first parameter to all functions as is common in other interpreted languages, rather than relying on a global state.
There's Fabric3, there are MySQL libs that work with Python 3 (and Django), futures are a backport of a feature from Python 3, and uWSGI and BeautifulSoup might be worrying, but the site shows mostly green.
At this point if the library hasn't been ported to Py3, someone else would have done it for you or abandoned it.
Ansible works just fine on python3 for both local and remote actions. You need to explicitly tell it to use python3 at the moment, but this will go away eventually.
There are a couple modules with options that break in a Python 3 environment, but that number is I think in the single digits now. I’m very close to switching all my Ansible masters to 3 finally.
BeautifulSoup4 is Python3 compatible. It was released under a new name.
uWSGI has Python3 support as a module.
I have been using both on Python3 for many years. I don't think there is any python module left which does not have Python3 support or a more modern, better alternative.
Basically once 3.7 is out (which is soon), CPython3 will finally be broadly faster than CPython2.7.
Interpreter startup time is still annoying though. I really wonder what the right strategy for addressing that is. Like honestly 2.7 is faster than 3.x, but 2.7 CLI tools still feel silly to use.
If you are doing raw number crunching you shouldn't be using Python in the first place. The whole idea of Python is that it makes it easier to write better algorithms, lowering your big-O, or to use Python for the lightweight orchestration and call into native processing libs or external processes such as numpy for the heavy lifting. Same goes for distributed computing, which is becoming much more popular; asyncio, await, and other improvements from py3 make it easier to write fast and robust distributed code, improving your overall application performance.
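A toy illustration of that division of labor (sizes invented):

```python
import numpy as np

samples = np.random.rand(10_000_000)  # allocated and filled in native code
mean = samples.mean()                 # the hot loops never touch the
variance = samples.var()              # interpreter; Python just orchestrates
```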
On the one hand... This is 3 years after the first time I saw a Gentoo python3 only environment.
On the other Python2 support will go away when the community is done with Python2. And that process goes a lot slower than I think most people appreciate. Still, a big vendor changing their defaults helps a good deal with the push.
> most popular packages are now compatible with Python 3
I often see this but I think it's a perception from the Internet/web world.
I work in CGI; all (I'm not kidding) our software packages (we have many) are on 2.7. You will never see them used in the "web/Internet/forum/network" space, but the day-to-day job of millions of people in the industry is 2.7.
And we are a tiny, focused industry. So I'm sure there are many other industries like us which are on 2.7 that you will never hear of.
That's why "most popular" means nothing once you look at how Python is used as a whole. We don't use any of these web/Internet/network "popular" packages.
I'm not saying Python shouldn't move on. I'm just trying to argue against this "most popular packages" line while millions of us, even if you don't know it, use none of them.
If you aren't using the packages that most people use, and decided to roll your own, then it is on you to upgrade. I'm not seeing the issue here: of course your custom niche libraries haven't been upgraded if you haven't been diligent.
The py2 end of life is supposed to give your manager hard reasons to switch. But if you never use PyPI packages and don't interact with the web, you don't need to upgrade, as you don't need to install packages that might be 3-only and you don't have much potential for security vulnerabilities.
Well, the CGI industry had money, competent people, and 10 years to upgrade. An entire decade. And a LOT of tooling and documentation to help.
I made a lot of code conversion from 2 to 3. Most of them took me a couple hours to a few days.
I'm currently working on a 2.7 project that will never migrate because they literally patched the cpython runtime, but you can't freeze a whole community because some will take bad technical decisions or accumulate tons of technical debt.
Now, Python 2.7 will not stop working after 2020. It's just that we, as a community, will stop paying the price for the ones that didn't move. If you want to stay there, you'll pay a commercial actor for it.
In 2020 CentOS 6 will stop receiving updates. Will you go complain that the new RPMs are not compatible? No: either you update, or you pay for commercial support.
Ruby and node did it in a few months and told the community to move or die. Nobody complained.
Perl took so long the language has been forgotten.
PHP literally canceled the version 6, and made it taboo.
We gave years, and means, then extended the deadline, and heard nothing but complaining since.
I won't pretend the migration was masterfully executed. The whole 2/3 debacle was painful.
You could argue that 2.y is also a different language than 2.x, but this is beside the point: as long as there is a compiler for that language and it works (for your definition of works), you don't need to translate your programs into another language just because you have money.
You don't need to upgrade your servers to the latest Linux kernel either. But in 10 years, you won't get any security updates for it, unless you pay a lot of money.
It will be the same for Python.
You want the excellent, money-making free work of volunteers? Do your part.
You don't want to? That's OK. In 2020, plenty of companies will sell you services at the real market price of your technical debt.
Is 'ignoring security' now the free pass that gets thrown into every argument against opposing positions?
I'm being facetious about that; but you know that security risks can be reasoned about and mitigated by means other (and sometimes less costly) than simply upgrading software.
I don't think that CentOS 6 is bug-free, yet it EOLs in 2020 too.
Actually I'm using it right now and I'm positive it has at least one bug when running in VirtualBox.
My client won't upgrade. They pay support, expensive support, to keep their old version.
In 2020 I'll open a shop to convert old Python code bases or fix bugs in them. I'll charge 4 times market rate. For me it's a net win that people don't migrate.
Sometimes I wonder if the opensource movement, with its perennial “update anxiety”, is actually busy generating an industry of legacy maintenance. In a way, it’s a natural extension of the original “development is free but you pay for support” idea, but I don’t think anyone openly elaborated it into a long-term revenue-generating strategy. It looks like a slam-dunk, to be honest, with the only caveat being that work is extremely unfashionable.
It's not the opensource movement. It's IT in general. A lot of software stopped working with the windows 10 update that was forced on the users.
Android breaks the API regularly by changing the permission game.
The PSF has limited resources (a budget of 3 million dollars) to exist on (this includes running PyPI and organising PyCon), but I think its track record is quite good, even compared to commercial products.
Actually, Android doesn't break the API, even by changing the definition of permissions. If you have a breakage, investigate your app manifest, what API version you are targeting. Android frameworks shim the old behaviors for the apps that declare the use of the old API versions.
E.g.: since API level 19, the Alarm system has been completely remade. One must check the API level and behave accordingly. The same goes for permissions after the advent of Android OS 6.0+. But we still need the support library to bring Fragments and the ActionBar to API levels lower than 11. And VectorDrawables to API levels lower than 23. And many more objects.
The shims are managed by targetSdkVersion, i.e. if you declare targetSdkVersion >= 23, you will get the new permission system (because it was introduced in 23) if running on [23,maxSdkVersion] device; if it is running on device that has API level [minSdkVersion,23), you get the old one; if you declare it < 23, you will get the old one, always, on newer devices too.
The targetSdkVersion says what you designed against. If you claim supporting the new API, you should handle the new API (and that includes detecting what the underlying device supports). That does not preclude you from including the support library needed for lower API levels still supported.
Except if you release a new/updated app, it MUST target a recent (less than a year) targetSdkVersion.
"2019 onwards: Each year the targetSdkVersion requirement will advance. Within one year following each Android dessert release, new apps and app updates will need to target the corresponding API level or higher."
That's Play Store policy for new apps and app updates, not a technical limitation of the OS. Old apps will keep working just like they did before, the new APIs will not break them.
The old trick is to get your changes into the main branch so you don't need to maintain it yourself. That is also why anyone bothers to do so in the first place.
Companies who cannot upgrade their codebases are companies who cannot maintain their codebases.
Eventually, the technical debt hurts your ability to deliver with both speed and safety. When your org can't do that, your org stops being competitive in the market, and if the market doesn't kill your company then the brain drain will.
(small and non-notable exceptions for the literal handful of types of orgs where this is not the case)
True, but is there a situation where upgrading will hurt instead of benefit?
Not every solution is just as good for everyone. Think of the delayed upgrade game that some people play. They wait for others to upgrade first to get rid of new bugs at their expense. Now take this game at a 10 year extreme.
For them, the new branch (e.g. new language etc.) is simply too risky. It's not that they are not smart enough to upgrade; they simply have different opinions on what is valuable than you do.
Yes, upgrading is an engineering expenditure like any other. If you devote manpower to maintaining your codebase you're not devoting it to user-facing features. As such, upgrading your codebase can hurt your ability to deliver some priority feature on-time.
But that kind of zero-sum thinking is myopic. My whole point is that if you only ever prioritize feature work, eventually you lose the ability to deliver features. Nobody in their right mind thinks you can have a company with zero technical debt, there's always a balancing act in play, but if your managers are just playing the risk card without, you know, doing an actual risk analysis which includes the risk of not being able to deliver future features, then your company isn't making smart decisions.
Now, there do exist codebases where you can make the logical conclusion that there really will not be any future feature work, but it's still running in production and so it will need support, and it's a fairly large codebase, and so there would be an enormous cost to re-certifying for the upgrade with very little apparent benefit. That happens, but it only (really, truly, only) happens in large enterprises with many, large codebases and only so many engineers. Those enterprises do have the resources to hire new employees and/or outside contractors to pay down the technical debt on these half-alive projects - they just aren't prioritizing those resources on the technical debt, even long after it became clear that the clock couldn't be stretched out any longer. Through painful personal experiences, I have zero sympathy for companies in that position who think that they can solve their problems with more firewalls.
There is cost the individual user of an open source project has to consider, and there is cost the community of that project as a whole has to consider.
So from your perspective, just keeping the old version might be "the right thing to do", and at the same time the decision of the community to not support it anymore is also "the right thing to do" from their perspective.
As the community does their work unpaid, it seems you have no right to impose your perspective on them, except if you pay money for the necessary work. Which you are free to do.
I guess what the parent posters point out is that this will usually shift your own cost/benefit ratio in a way that upgrading becomes "the right thing to do" for you, too.
>> It's just that we, as a community, will stop to pay the price for the ones that didn't move. If you want to stay there, you'll pay a commercial actor for it.
That's very well put. But I guess this can happen only with big, important projects, where you can afford to lose/upset some users...
Your games don't work on Windows XP anymore. Actually, your USB3 mouse doesn't work on Windows 7 out of the box.
CentOS has LTS, but still EOLs.
Ubuntu's init system was changed to Upstart. Then to systemd. Also GNOME, then Unity, then with new menu/notif/systray semantics, then back to GNOME (but Shell), and soon Wayland. It breaks a lot of things.
Firefox's new addon system doesn't work with some addons from just last year.
NodeJS had 3 incompatible forks in its short life.
Twitter and Facebook API breaks every sunday.
JS frameworks are just madness.
Python broke compat ONCE.
Once since 1990.
Also gave 10 years to migrate.
In our industry, that's not bad at all.
And the community held. We worked. We wrote tools, docs, blog posts. We were there all the way. We have incredible libs like python-future to help.
And if any of that is not enough, well, Anaconda Continuum will be happy to do business with you.
2.X would still be included as an optional package, correct? I couldn’t determine that from the link, but it seems like RedHat’s practice. Or, they could install it from source.
If so, 2.X users still have two years or so to migrate. That’s plenty of time in my opinion, even when all of your code is 2.X. There’s even a library for it.[1]
> 2.X would still be included as an optional package, correct?
I wouldn't bet on a RedHat-provided 2.7 package. If they provide it, they will be stuck maintaining it, and their long-term support contracts are very long term.
On the Python side: from 2020-01-01, no commits will land on the 2.7 branch. RH is still supporting RHEL 4, which was released in 2005, and supported RHEL 3 from 2003 -> 2014, which (if they keep a similar timescale for RHEL 8) could leave them trying to maintain a working build / test / development system for py27 for an extra 10 years.
In fact I can play EarthSiege 2, a game from the 3.11/Win95 era, just fine on Win7 x64, with the only things not working being joystick input (I guess it does some shenanigans with the MIDI/joystick port in addition to using the Windows joystick API) and the pause screen, which shows your vehicle spinning around too fast (probably because its speed is tied to CPU speed).
Microsoft puts, with the exception of drivers, a lot of effort into keeping backwards compatibility. And that is why people like it, in contrast to Linux, where "things break" is the norm, and even on OS X it's not unheard of. Oh, and that is also why big enterprises stay as far away as humanly possible from anything NodeJS or more modern than PHP and Java.
Whenever this comes up I feel it is worth pointing out that Linux's "Things break" approach is a problem exclusive to the userland tooling built up around the kernel. Linus takes a very hardline stance against breaking kernel ABI compatibility (except for drivers), but pretty much every piece of software outside of that, including GLIBC, thinks it's totally ok to break things all the time.
It's really very sad that the Linux community never adopted Linus's view.
Because they are entirely different things. Breaking the kernel ABI doesn't just break a few packages for a popular language, it breaks the entire ecosystem. More importantly there's literally no reason to break compatibility in the kernel. There are perfectly valid reasons to break compatibility in programming languages.
Yes and your Python 2.x code will still run after 2020. But going the other way - new programs might not run on older distro releases, which is what GP said.
Meanwhile I routinely use Lisp written before I was born (in 1986) that works out of the box in 2018. I’ve used Java packages (recently) from 1999. My company still uses python/pypy 2.7 and we see no compelling reason to upgrade. If there is any sort of upgrade, it will be off of python.
When, oh when will we ever finally unlearn "worse is better?" It's hard enough writing good code without having to fight your tooling (non-orthogonal, non-homoiconic, non-malleable, non-backwards-compatible languages, some with header files propagating changes upwards and breaking user code, and some with fascist type systems causing combinatorial explosions of containers and factories, ugh)
Python broke compatibility a lot more than once. Python 3.5 and Python 3.6 are not compatible, for example. There are even cases where code written for an older version of Python 3 will not work on a newer version of Python 3. For example, see PEP 479.
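For instance, under PEP 479 a StopIteration leaking out of a generator becomes a RuntimeError, which changes the behavior of previously-working code like this:

```python
def first_item(it):
    # If `it` is already exhausted, next() raises StopIteration here.
    # Before PEP 479 that quietly ended the generator (so you'd get []);
    # with PEP 479 in force (default from 3.7) it becomes a RuntimeError.
    yield next(it)

try:
    print(list(first_item(iter([]))))  # old behavior: prints []
except RuntimeError as e:
    print("PEP 479 in effect:", e)     # new behavior lands here
```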
"And if any of that is not enough, well, Anaconda Continuum will be happy to do business with you."
Yes, this is a key point that seems to be forgotten. The Python development team will end their support on 1st January 2020, but there are other providers with Python expertise who will almost certainly pick up the commercial opportunity, e.g. ActiveState, and possibly even Red Hat with a separate product.
This is not the end of Python 2, it's going to be a transition to a different support model. Commercial users either pay to move up, pay to rewrite the Python code with something else, or pay for a super long-life version of Python 2.
The awkward part may be academic research, where there's probably a lot of Python 2 code that has no maintenance budget. I would not be surprised if a project appears to build no-support versions of Python 2 with essential fixes after 2020.
OK, but then can the Python community stop saying "Pretty much all of the packages are converted anyway!" if the answer is really "we don't care anymore if yours aren't"?
> Well, the CGI industry had money, competent people, and 10 years to upgrade. An entire decade. And a LOT of tooling and documentation to help.
I don’t think that’s a money issue. The Python 3 upgrade is not really compelling. You get slower speed (at least until recently); tests might pass on 3.x, but documentation and edge cases are still better on 2.x, etc. Meanwhile, nothing is really exciting in the 3.x branch.
>While nothing is really exciting in the 3.x branch.
Sorry, have you looked at Python 3 lately? I don't think I can sum up all of the amazing work that's been done in one post (async? Cleaned up stdlib? Better errors? Not having an aneurysm from text encoding issues unraveling your whole project? New splat syntax?).
I would really encourage you to check out what's happened in the last ten years. I think you'll find many more exciting developments than you think.
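To pick just one: the generalized splat/unpacking syntax from PEP 448 (3.5), assuming that's what is meant above (values invented):

```python
defaults = {"host": "localhost", "port": 8080}
overrides = {"port": 443}

merged = {**defaults, **overrides}   # {'host': 'localhost', 'port': 443}
combined = [*range(3), *"ab"]        # [0, 1, 2, 'a', 'b']
first, *rest = combined              # extended unpacking (PEP 3132)
```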
I think that he's kind of right about the problem.
In my opinion, Python's primary business value has been that sweet spot it occupies. It's not the most fun, but it's pleasant. It's not the fastest, but it's not too slow. It's not great on resources, but it's not too bad either.
Good people will work with it, and they tend to be the "let's not get creative, let's just get it done" folks that businesses love. Mediocre people do great with it, because it feels much nicer than many of the other things they've worked with and it channels them towards producing better code and being more productive.
Python makes it a huge pain-in-the-ass to get weird/creative, and generally frowns upon it, so people tend not to as much. Junior and low-skill contributors can't really get in too much trouble as long as they stick to the program. It's an awesome "just sit the fuck down and do your work" language.
As a language, it's carved out a great space being pleasant, consistent, well-rounded, predictable, etc.. I know there are people on here using it to build their rocket ships. And it can go there too... maybe not as flexible or fun as more extreme alternatives, but it's also less likely to blow up in your face. For those software rocketeers (probably most any SV start-up), updating makes undeniable sense. Python 3 is hands-down better, and they can handle the transition no problem.
But a lot of people don't use Python to build rocket ships. They don't build rocket ships at all. They build delivery trucks and conveyor belts and coffee dispensers. And, for them, the very things that make Python a great choice - a very clean and conservative focus on simplicity and stability - are the very reasons not to switch. There's just not really anything that's been introduced in the last 10 years that is going to make any real difference for their use cases. Whatever trouble they've had with unicode and the other bs has long been lived with. Whatever's missing they've long lived without. And things have been just fine.
Python's killer feature - being a really nice and well-rounded option that works pretty damn well for a large swath of people - is sorta its undoing here. It's hard to get much better at being exactly that.
I mean I switched to 3. It's clearly better. But if I was running a Python team that had been effectively just trying to get the job done in 2.7 for over a decade I would have to admit that I wanted to switch for my own sake - I think it would be unlikely to make them that much more happy or productive.
Hell, many people can't even set their same environment back up within a week of losing their laptop.
> But if I was running a Python team that had been effectively just trying to get the job done in 2.7 for over a decade I would have to admit that I wanted to switch for my own sake - I think it would be unlikely to make them that much more happy or productive.
Part of being a software engineer is keeping your environment up to date. Sure, I could keep using Node 0.10. But I'll get no updates for security or fixes for bugs.
If you're on a team that hasn't updated to a version that's supported, then you're neglecting one of the fundamental ongoing maintenance tasks involved in software engineering. If that's "too hard" to do over the course of a decade, then perhaps there's something wrong with your engineering culture.
Sure, you don't have to do it. But don't expect the rest of the world to continue to support your old setup. If you have to compile Py2.7 from source because RHEL doesn't come with it, that's fair penance for not keeping up with the community. It's just about the most entitled thing in the world to say "I didn't spend the two weeks in ten years time to upgrade to a newer version [using the numerous automated tools] and I'm mad that the world at large isn't making it easy for me to continue to not do anything."
I'll just say this... I still have 2.7 as my base install... for Ansible. Because they haven't switched yet.
Ansible is owned by... Red Hat. They acquired it in late 2015.
Seems like Red Hat - the people in the post that we're talking about that are shoveling folks off 2.7 - has been neglecting one of the fundamental ongoing maintenance tasks involved in software engineering, and perhaps has something wrong with their engineering culture.
Sorry, this shit is just too funny.
And it seems that Red Hat is booting Ansible from the core repos as well (looks like it's in that deprecation notice!), presumably instead of spending the "two weeks" to update it (you might want to go ask the Ansible team why they're too damn lazy to spend that "two weeks", see what they say).
However, unlike Red Hat and Ansible, some organizations depend on the software in question, and can't just sideline it 'cause the shit they've successfully run on for a decade-plus has lost its blessing.
The 3.x branch has developers committed to improving it and the 2.x branch basically doesn't. You shouldn't bet on stagnation in a technology industry.
> Ruby and node did it in a few months and told the community to move or die.
Uh, Ruby’s big compatibility breaking upgrade (1.8 -> 1.9) took more than a few months, it took years for the community to fully move. (Because of Ruby’s pre-2.0 versioning practice, 1.8.x -> 1.9.x was a major version upgrade, and it was actually a more significant one than the 1.9 -> 2.0 update. But lots of people stayed on 1.8 throughout most of the 1.9 period and the transition didn't really complete until late in that period or early in the 2.x period, and 1.8.7 was getting maintenance releases for four years after 1.9.1 stable release.)
> Most of them took me a couple hours to a few days.
Since 2008, the automated tooling has vastly improved. I'd bet most of those "few days" issues would either not exist or take hours, given today's advances in automated code translation and new knowledge on Stack Exchange.
Most of CGI’s customers (if it’s the CGI I’m thinking of) are used to supporting software ecosystems like Fortran, Java, and COBOL for 20+ years on major releases. 10 years is “mid-cycle” comparably. When the IRS pays Microsoft millions and millions to keep supporting Windows XP, these “niche” organizations may be small in actual engineering persons involved but from a capital perspective extremely overweight.
Once again, I'm not saying Python should not move on; actually, as a CGI "dev" (Technical Director, we say) myself, I would LOVE to trash 2.7 and move on.
I was just rambling about "most popular packages", which are only web/network/academic oriented, while a huge part of the Python user base doesn't use them at all. It's just something the "Python community" can't see, because most of them are web/network/internet/academic devs (and maybe that explains why they thought breaking compat could go well; internet bubble?).
I really want Python to move. I had the opportunity to discuss with some vendors involved in the industry, and they told me the Python 3 switch was a running gag... I hope 2.7 becomes so full of security holes that my industry finally does this once and for all and stops joking (otherwise they will never move on), but please stop saying "most popular packages" represent how Python is used, as if that were the whole picture; it's definitely not.
TBH, I think the Python community provides great tools for the transition, and writing closely compatible Python 2 and 3 code is not that hard.
Once my industry is on 3+, as a dev, I will love following the deprecation cycle.
The web world indeed can rapidly change their entire stack from LAMP to MEAN to Dockerized Q-Basic CGI scripts and load balance their Twitter clone. There's challenges for sure, but pushing updates is simple and quick, and you can always throw more cloud if you run into walls. If your complete platform rewrite fails, just roll the load balancer back over and try again tomorrow.
It's a lot more difficult when your Python application is communicating with a temperamental motor controller over an ancient RS-232 link with AT commands.
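(For flavor, the kind of code in question, with a hypothetical device path and command set, using pyserial:)

```python
import serial  # pyserial

port = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=2)
port.write(b"AT+STATUS\r")            # Python 3 insists on bytes here;
reply = port.read(64)                 # Python 2 let str literals through
if not reply.strip().endswith(b"OK"):
    raise RuntimeError("controller did not acknowledge: %r" % reply)
```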
There are only ten of these devices in the world, and three in deployment. They perform critical functions, and they need to run 24/7 without failure or human interaction. (Did I mention these devices are remote, and it takes an afternoon to go out and investigate a failure?)
Every single failure results in a few lines of change in the code base. Those lines cost quite a bit -- both time, money, and loss. It's quite stable now, but that stability came at a price.
But haha, what am I talking about? Geez, I've had ten years to move off this "dead" platform. Let's run it through 2to3, or perhaps try a port to nodejs and deploy it tomorrow.
The attitude of most python3 devs can be summarized as: "I moved my Twitter clone from python2 to python3, and it was easy. What's the big problem? You've had ten years to get this done! Hurry up, you're hurting my productivity."
Just curious, what parts of your code are not portable to Python 3? The biggest barrier usually seems to involve strings/Unicode. Or are you relying on packages that don’t have Python 3 support?
In any case, it sounds like you have a stable Python 2.7 system running, so what’s the issue? Nobody’s going to come and take your code away from you.

Easier said than done. This isn't the web world, things aren't so simple.
How do I mock a temperamental motor controller responding to AT commands? I mean, I'm sure it's hard enough to replicate the oddities of their serial port parser, but the temperamental part is gonna be tricky. This is an embedded system I'm talking to, which is parsing bytes coming over a serial port in an interrupt service routine at 16 MHz with 4 kB of RAM. Not an async callback with JSON in a nodejs app with GHz of processing power and 8 GB of memory. Emphasis on: temperamental.
Oh, and don't forget. This is a mechanical system, not a website. We'll have to put torque and strain gauges on the system to make sure it's doing what we want.
The skinny is, the system my Python scripts talk to is far too complicated to just "stub out". You may be able to test 90% of the cases, but what you're really worried about is the 10% of freak accidents that break things. Look up the Therac-25 and how that failed.
One can separate the I/O code from the application logic, and do data-driven testing of the application logic. Use recorded data from real-life systems to build a (partial) model of the system, at least enough to test individual interactions or short sequences of interactions. Capture this data continuously in production systems, so that when an incident occurs it can be cut-and-pasted to form a new testcase. The latter is critical for building up a library of regression tests for those nasty things one had to find the hard way - so that at least each one only happens once.
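A sketch of that separation (frame format invented for illustration):

```python
def parse_status(frame):
    """Pure protocol logic: turn b'POS=120;OK' into a dict. No I/O."""
    fields = frame.decode("ascii").split(";")
    pos = next((int(f[4:]) for f in fields if f.startswith("POS=")), None)
    return {"ok": "OK" in fields, "pos": pos}

# Regression test replayed from a frame captured during a real incident:
assert parse_status(b"POS=120;OK") == {"ok": True, "pos": 120}
```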
With this basis you can optionally go to hardware-in-the-loop testing, where some of the same testcases run with real hardware instead of fake responses. This should then be instrumented with the necessary sensors/code to be able to observe its behavior. This ensures that your mock and the system stay in sync, and increases coverage for non-deterministic cases. At least one such system should be hooked up to your CI server to test every revision of the software, to catch issues early.
When building the next product, consider shipping the instrumentation necessary for QA in all products. That way the testing system is closer to the real system, and one can integrate self-checking, failsafes and error reporting into production systems. This can be used by final QA (before shipping), by engineers debugging on-site, and as part of periodic (or continuous) health checks on the equipment.
I've done such things for electromechanical systems, though nothing with a potentially-dangerous motor yet.
Are you afraid of network security attacks to your embedded hardware? If not, then you probably don't care about missing security updates. What are you concerned about with python2.7 no longer being supported? Like, are you worried about getting python2.7 installed on a new piece of hardware five years down the road? I understand you dislike the tone from the python3 devs, but I don't know what you would like instead.
Fortunately, on the web, things never go wrong. Hardware never fails. Traffic never spikes. You never face DDoS attacks or people actively trying to break your security. The network is never unpredictable. It isn't a complex distributed system consisting of hundreds of interconnected parts. Every potential failure can be anticipated and easily tested against. And every component scales effortlessly as the system grows by orders of magnitude. A lowly web developer could never understand the Herculean feat of writing a Python script to communicate with a motor controller over a serial port.
I don't understand your argument. If you don't use any of those packages, then indeed it does not matter. As with all decisions, whether this argument applies to you depends on your situation, but in my experience incompatible packages were often the most important reason not to choose 3.x.
it's really not a big deal. If you absolutely need legacy python support, you will very likely be able to solve it using a virtualized or containerized environment.
I can see about the same happening for Java 8 to Java 11, where 9 and 10 are transitional releases with a really low level of engagement, especially from third-party libraries, kicking off the JAR hell again.
I think one of the biggest mistakes is that 3.0 didn't have "PREVIEW RELEASE" right in the name. People started counting the day it was released, meanwhile the developers didn't have any expectation that (many) people would use it in production.
We've switched from 2.7 to 3.4 (and upwards to 3.5, 3.6) in about a month or so. It delayed some deployments but nothing serious.
If you do it step-by-step it's not that hard. The worst part was the str/binary conversions, but afterwards everything was a lot easier and better to maintain.
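For anyone who hasn't been through it, the str/bytes split boils down to a small discipline that Python 2 never enforced. A minimal illustration (the sensor string is made up):

    data = b"sensor=42\n"         # bytes: what actually comes off a socket or port
    text = data.decode("utf-8")   # decode exactly once, at the boundary
    assert text == "sensor=42\n"  # str: what the rest of the program handles

    wire = text.encode("utf-8")   # encode exactly once on the way back out
    assert wire == data

    # In Python 2, mixing u"..." and b"..." worked by silent coercion; in
    # Python 3 mixing str and bytes raises TypeError, which is where most
    # of the porting effort tends to go.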
If your company is big enough you should maintain it and keep it up-to-date, not rely on "it just works"(TM) and hope for the best after 10 years. The death of 2.7 had already been in the air for years.
It's not that hard unless you had some very advanced code utilizing Python 2 internals that were significantly changed in Python 3. However, conversion takes a lot of time and if something works, why should you be forced to change it?
Because non-IT users (your customers) are sick and tired of apps from 2000 that run clunky and ugly because someone is "forcing you to change it" (aka doing your job) and you don't feel like it.
I know some extremely well-written apps from 2000 without a trendy obfuscating flat interface, and many trendy albeit unusable monstrosities from the past 3 years. What kind of argument is that?
Embracing pure technical churn is no one's job. That's why Python 3 is viewed as a problem by so many. People embrace true technical innovation, which it is not.
IE6/7 objectively sucks. One could argue Python 3 sucked more than Python 2 based on performance benchmarks even a year ago. IMO both Python 2 and Python 3 are pretty messed up languages with similarly messed up libraries, and the "pythonic" way often means "idiomatic" in the ugly sense of the word. But I do Deep Learning, data science and stuff with Boto/MWS on Amazon, so I have to stick with it. However, seeing all the warts Python 3 throws at me and a crowd of people nagging me to switch all the time, I frankly don't see much value in switching from one set of warts to another, wasting the most precious thing I have - time - just to duplicate what I already had (and I disliked developing in Python anyway, but it was the fastest way).
Not referring to parent specifically. By "you", I'm referring to all python 2 hardliners out there.
Nobody's telling you to change your codebase. You're free to carry on using python 2.x. If it already works, it will continue to work in its current environment.
So why are you attempting to tell others in the python ecosystem, be it core maintainers, package authors or OS vendors, to support your continued use of python 2 for free?
Nobody is forcing you to do anything. The developers of Python aren't volunteering their labor to continue to support older versions of Python. RHEL is deciding not to continue to support older versions of Python past their EOL per their developers. Seriously, Python is free software, libre and gratis. You are completely free to do whatever you want to with Python 2.7, except expect other people to do the work for you.
yeah and 90% of business code is also written in COBOL and another 90% in MUMPS, but somehow the world continues to work despite the fact that they've been pining for the fjords for decades.
There is obviously a massive jump from COBOL to Python, and a small but annoyingly incompatible step from Python 2 to Python 3. I hope you understand the difference. It's not like Python 3 is the best language ever; it's just a variation on the same theme (and IMO not a great enough one to justify breaking backwards compatibility). Ask the Perl guys how they liked a similar situation.
If I didn't need to use Python, I wouldn't. Now why do I have to rewrite my old code for v3 if it works? Why do I have to waste time on such a stupid thing? All my new code is v3, but why is somebody nagging me about v3 all the time when v3 is full of warts and mindblowing conceptual holes as well, and doesn't really address multi-threading (GIL love forever) etc.? Coming from C++/Java to the Python world was like throwing away a lot of powerful stuff in exchange for faster writing. I didn't expect to be dragged down by the additional time to rewrite because of some half-baked API-breaking changes.
You don't have to rewrite it, just like you don't have to rewrite your COBOL or MUMPS or RPG or any of the other all-caps languages.
In another 20-30 years Python may very well be added to that list (though less annoyingly capitalized). No one will have to rewrite their Python into whatever the new thing is then either.
But not re-writing ProgramX into LanguageX+1 is not LanguageX's authors' problem or CompanyX-no-longer-using-LanguageX's problem.
If they had called Python 3 e.g. Cobra or Boa Constrictor, or even Turbo/Rapid/Mega Python or similar nonsense, likely nobody would complain, as it would clearly distinguish itself from the existing ecosystem; all the enthusiasts could jump on the wave, boasting how much better it is and that it comes from the original authors of Python, and then it could go through the usual Darwinian selection to see if it prevailed. But breaking compatibility in a major way doesn't make a lot of people happy when the gains are as small as with v3.
Yes they would. The apocalyptic doomsaying that would have happened if Guido had said "oh hey guys we're going to stop developing python here in a bit" would have been off the fucking charts.
A similar situation in Perl to the Python 2->3 transition is going from Perl 5.6 to Perl 5.26, with the difference being that Perl stayed mostly backwards compatible.
If you are talking about Perl 5->6 that is more like going from C++ to D. (With a touch of Haskell+Go mixed in)
For several years we have been considering the two current Perls as sister languages. Both are being actively developed, with a yearly stable release for Perl 5 and a quarterly stable release for Perl 6. (Rakudo Perl 6 is mostly written in Perl 6 or a subset of it, and is also a newer codebase; so it is easier to change without breaking things.)
Ansible is an application, not a library, so it is not a problem. Python allows multiple versions to be installed at the same time; for example, you can have Python 2.7, 3.3, 3.4, 3.5 and 3.6 installed side by side and all will work fine without issues.
So you can have Python 2.7 installed for Ansible and 3.6 for your application.
I use Ansible via the API. I want to do it Python 3. I moved the ansible directory from /usr/local/python2 ... to /usr/local/python3 and then squashed out any bugs by hand.
It works! I'm using Ansible via the API in Python 3!
> "- Python 3.7 performs about as well as 2.7 with future release expected to be better"
Are you sure about that? My laptop (2.7 GHz, 4 cores, 16 GB RAM) would fall behind my desktop (3.8 GHz, 6 cores, 64 GB RAM) because it would be paging things in and out of RAM, but once the models started running (like XGBoost), py2.x was about twice as fast.
I am not stating this definitively, because I was not trying to benchmark the two. I just casually noted that my laptop would always finish running the same code in 1 hour vs over 2 hours on my desktop.
Why would previous versions of Python3 (e.g., 3.5, 3.6) not perform as well as 2.7? What about Python3 makes it slower? Is it a language design issue?
We’re in the process of migrating several keras/TensorFlow-based projects to 3.6, and I’m struggling to get my emacs environment setup to handle Python 2 and 3 code simultaneously.
Django also recently dropped python 2. I think it's nice that we're finally seeing support for Python 2 end. We've been in a frustrating middle ground of supporting both for too long now.
This is un-broken as of 2018-03-09[1,2,3]. `brew install python` installs python3, but it only installs a link to this at `/usr/local/bin/python3`, not `/usr/local/bin/python` (as it did for a while).
In other words, `brew install python` now complies with PEP 394[4].
$ brew install python
…
==> Caveats
Python has been installed as
/usr/local/bin/python3
Unversioned symlinks `python`, `python-config`, `pip` etc. pointing to
`python3`, `python3-config`, `pip3` etc., respectively, have been installed into
/usr/local/opt/python/libexec/bin
…
$ ls -l /usr/local/bin/python{,2,3}
lrwxr-xr-x 1 osteele staff 38 Mar 22 11:23 /usr/local/bin/python -> ../Cellar/python@2/2.7.14_3/bin/python
lrwxr-xr-x 1 osteele staff 39 Mar 22 11:23 /usr/local/bin/python2 -> ../Cellar/python@2/2.7.14_3/bin/python2
lrwxr-xr-x 1 osteele staff 34 Mar 31 12:27 /usr/local/bin/python3 -> ../Cellar/python/3.6.5/bin/python3
pyenv is your friend. It manages the installation of multiple versions, and you can choose on a per-pipenv basis which one you want to use. A certain codebase requires 2.7.10, not 2.7.11? Pin it to 2.7.10 and it'll stay there forever, even if you upgrade the system or homebrew python.
You can't "there fixed it" for the infinite number of projects that exist in the field that this breaks. "It never worked on FreeBSD" is very different from "it no longer works on new versions of RHEL".
Even though it obviously still works, using "python" is no longer the recommended way to launch Python applications. You are supposed to use "python2" or "python3" if you care about which version you run with.
Python applications are second class citizens at HostGator and other cheap hosting services. Most people using Python applications are using VPS, containers or PaaS because deploying Python applications is a lot harder than just "drop the files" somewhere like it is with PHP.
If this is an issue for you, I would highly recommend that you have a look at virtual environments via tools like virtualenv and conda. That way you'll be able to run all the versions you like.
I've found that if I set my default Python version to a Python 3 virtual environment, it breaks some system tools on my Fedora system (probably just because my virtual env doesn't have the necessary libraries installed, though I haven't tried to install them to test). Specifically gnome-tweak-tools, but probably others that are built with Python. Sort of a funny quirk.
That's not me saying "don't use virtual environments", just be aware you'll probably need to switch back for some system tools.
This is strong evidence that something is set up wrong in your system. pyenv doesn’t change /usr/bin/python. If an app is using your PATH to find python it is going to break.
The apps in question use a shebang of `#!/usr/bin/env python3`. I believe it is working as designed. This is a Fedora system, which switched to Python 3 a couple of revisions back. But, I would assume this would affect any system that has utilities that use that shebang.
Edit: Out of curiosity, I grepped for other system utilities that might have this quirk, and it seems like nearly all Python utilities call /usr/bin/python3 directly, but ten of them (on my system) use `env`. That might be a bug worth filing with the Fedora folks. I can't imagine they want it to act this way when a custom Python is installed. Though it's been that way for at least a couple of releases.
I think Conda allows installing different versions of Python (good) but it doesn't play well with Pip (bad)? Last I tried it also bundled dependencies in its own particular way (ugly)? I think it was more geared towards SciPy and NumPy and other scientific development than towards general purpose programming.
> I think Conda allows installing different versions of Python (good)
Yep. Also allows installing different versions of node or ruby.
> Doesn't play well with Pip (bad)
You can `pip install` things into a conda environment but I forget how it works with globally pip-installed packages. I don't think it plays nicely with virtualenv.
> Last I tried it also bundled dependencies in its own particular way (ugly)? I think it was more geared towards SciPy and NumPy and other scientific development than towards general purpose programming.
Yep, it was made by the maintainers of SciPy and NumPy to solve the problem where you have a python package that depends on a system library written in c/fortran/IDL. For an example in ruby, see the problems people have with charlock_holmes: https://stackoverflow.com/search?q=charlock_holmes
You can install things with pip but you have to be really careful, because conda tries to do clever things to avoid downloading multiple copies, which is what you end up with in virtualenvs. So if you have two environments both using the same Jupyter version, installed with 'conda install jupyter', there will only be one copy, symlinked into both environments.
The problem is, if you do 'pip install' in one of the environments, it will upgrade the existing version but overwrite the conda-installed one, which will break your other environment. Often, though, the official conda packages lag behind those on pip, so you have to do this anyway - or use Conda-Forge, which gets more regular updates, or use pip (but carefully uninstalling all the conda-installed stuff in the environment first) if you want the bleeding edge.
The pip that conda and conda-forge install now have a two-line patch that fixes this. Newly fixed within the last month or so. (It's in our queue to push it further upstream.) Just make sure you `conda update pip` instead of `pip install -U pip` and you'll be fine now.
The focus of my current right-now dev work is to teach conda to better understand non-conda-installed python packages so we respect what's already there even if it's not a conda package.
That would be a humongous boon, especially for Windows. I really wanted to use Conda, but I really, really needed Pip for packages. I don't really remember what blew up when I tried it, but it wasn't pretty.
Glad to know that soon enough that will be just a memory :)
They're two different languages. I don't see why python3 "replaces" python2 anymore than say ruby does. IMO they should have separate namespaces and invocation.
They do -- it's trivial to have python2 installed on the same system as python3. The deal with RHEL is that they will no longer include python2 packages at all.
That isn't really to say that somebody else won't make packages.
>The deal with RHEL is that they will no longer include python2 packages at all.
Generally, if you are installing your own application you will use a virtualenv with all the packages closed off from the base operating system and dependencies installed with pip (we're all doing this, right folks!?).
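A minimal sketch of that isolation, using the stdlib venv module (3.3+; with_pip since 3.4) rather than the virtualenv tool - the paths and package here are made up:

    import subprocess
    import venv

    venv.create("/opt/myapp/env", with_pip=True)  # isolated from system Python
    subprocess.check_call(["/opt/myapp/env/bin/pip", "install", "requests"])
    # Launch the app with /opt/myapp/env/bin/python, so whatever the OS vendor
    # does to /usr/bin/python can't break it.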
They share a special relationship, don't they? The difference from Python, though, is that you're restricting yourself A LOT more when writing C/C++-compatible code (you're basically writing C with a few exceptions).
The same compilers and tools are used for both, and many developers mix the two in one codebase, using small features from C++ in "C programs" because the compiler allows them to - for example, declaring variables in the middle of a function (before recent C standards permitted it), or function overloading by argument types.
But Python2 and Python3 are way more similar than C and C++.
Let's hope this will finally change the "compatibility with py3 is a feature" to "no py3 compatibility - no library" state of things. It's really surprising how stubborn some teams are.
What's amazing is that pip3 gladly downloads and tries to install py2 stuff. Because package/dependency management is a complete afterthought in Python.
And sure, it'll get better, and finally we have lockfiles (Pipenv), and maybe eventually a proper SAT solver will help resolving dependencies https://github.com/pypa/pip/issues/988#issuecomment-36084645... , aaand gradual typing is nice too, and maybe eventually we'll have a proper async library (python-trio, as asyncio is there, but not low level and/or not ergonomic enough).
Yet Python is free and open, and amazing, and I haven't contributed much other than report bugs, so I'm not complaining, just comparing to other ecosystems.
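(For what it's worth, the pip3-installing-py2-stuff complaint above does have a fix these days: since pip 9 and setuptools 24.2, a package can declare which interpreters it supports via python_requires, and pip will refuse to install it under the wrong one. A minimal setup.py sketch, with a made-up package name:

    from setuptools import setup

    setup(
        name="example-lib",           # hypothetical py2-only package
        version="1.0",
        python_requires=">=2.6, <3",  # pip 9+ refuses to install this under python3
    )

Older pips ignore the field, which is why the problem still lingers in practice.)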
Conda has a solver and has been available for however many years. It also works well with pip.
Among people who complain about Python package management, I have never heard an argument against conda other than "I don't like that they recompile base Python". What's your reason to discount conda?
Our python is compiled with the latest versions of gcc (on Linux) and clang (on macOS). So you're getting gcc 7.2 compiled python on RHEL/CentOS 6. And all of the security and performance enhancements that come along with it [1]. (Ok we're still on 7.2; we'll update to 7.3 in the next couple months.)
P.S. If you're looking for a latest and greatest C/C++ compiler toolchain that's backed with "production" testing via hundreds of millions of package installs and works on older versions of linux...
It's really irritating in the scientific programming world at the moment, because we've had some dependencies take years to get to Python 3 compatibility and they're now being all holier-than-thou about it.
We couldn't even start doing the work until our biggest dependency (which we use in 90% of the code base) was ported to Python 3, and while that's been available for about a year, they weren't distributing it and you had to compile from source to get the Python 3 version - and compiling this particular package, which shall remain nameless, is one of the most difficult compilations I've ever had the misfortune to try. It's notable by its absence on the UK national supercomputer because of this, for example.
So now, we're trying to port our code to it, but it's likely going to take 6 months to a year to do so. Our day job is doing scientific research, and that's what brings in grant income - so for us, it's not a huge priority to spend all our time working on the switch to Python 3, because we can run everything in a Docker container with fixed dependencies and it should continue to run fine for the next 10 years. It will happen, it's just not the most important thing on our list.
It’s not that surprising. People want to work on their tasks of interest. Smart programmers allocate some time for maintenance activity but py3 is a rewrite that is well outside the boundaries of normal maintenance. When the first thing you run has syntax errors and you might have to change almost every file to make any progress, it’s a rewrite.
Plus, many projects are maintained by people in their free time, often for no pay. Of course no one wants to spend a ton of free time redoing everything only for it to work apparently the same as before.
Ah. Still weird to say that "Python 2" is deprecated as of RHEL 7.5 when both the installation frontend and yum use Python 2.7.5, and Python 3 isn't even on the installation media.
(Grabbed the ISO and made sure the above assertions are true.)
I think this has been the ugliest, most badly managed update for a major language since Perl6. And this is not completely over, yet. Over the next 5 years I still expect to see people complaining about this Python3 thing. I hope the designers have learned a lesson.
People used to complain about the way Python changes were breaking their code. So it was decided to restrict all the non-backwards-compatible changes to a separate version called 3 and to stop breaking things in 2. That probably allowed Python to become more popular than it would have otherwise, but at the cost of a significant fork.
So what are you going to do? Either way has downsides.
And it isn't like developers really have a lot of power to guide the course of a language. Lua is an excellent example of this. If you want to write a Lua program you probably want to target version 5.1. The developers have released 5.2 and 5.3, but uptake has been fairly minimal, and it's a problem because 5.3 is a Python 3-like fork. You can lead, but that doesn't mean that anyone will follow. The Python 3 thing could have gone a lot worse (or better) depending on what you think of Python 3.
Ultimately the issue is that different people have different ideas about where a language should go (if anywhere) and no faction of developers or users have ultimate control. It isn't anyone's fault, it is everyone's fault.
Seems like the best approach here is to not introduce breaking changes to begin with. Go has done a very good job at this. And AFAIK most of the breaking changes in Java have been the introduction of new keywords and have been relatively easy to manage.
With one main difference being that we are perfectly content to let people continue to use Perl 5. (Most of us anyway)
Another is that Perl 6 brings in many features from other very disparate languages while making them seem as if they always belonged together. It doesn't seem like Python 3 brings that many new features.
I think the process was run about as well as it could be. It's just very, very hard to change a language like this, particularly something as fundamental as the type of strings. In retrospect more time should have been put up front into transition frameworks like six and into some of the compatibility affordances added to Python 2.7 and ~3.3; it took a while for folks to figure out the best way to write code that supports both languages.
Great! Mostly no problem. The backwards-incompatible changes are few and seem to rarely be used in the projects I've seen.
> PHP <= 7
Totally different story. PHP does upgrade slowly, but many of the backwards-incompatible changes are almost impossible to detect without 100% code coverage. A lot of legacy PHP < 7.x code bases are also going to be lacking perfect coverage.
That said, I think bespoke PHP is less common than people think. The WordPress/Drupal/whatever installations are major drivers of the usage numbers for PHP.
I believe you did that conversion and that it worked for you. I still don't think it's a slam dunk when you're talking about a stable, production product that can't have errors. It's almost impossible to do the conversion with static analysis alone, unless you know something I don't.
> We're expecting to hear more about RHEL8 next month. Other current expectations are that Btrfs will be completely gutted in favor of the company's own Stratis Linux storage tech, the workstation session using GNOME on Wayland by default, shipping with the GCC 7 compiler, and possibly shipping with the Linux ~4.14 kernel. We're expecting an alpha sometime soon and would be perfect if announced at May's Red Hat Summit 2018.
IMHO they are waiting for Modularity to mature https://docs.pagure.org/modularity/ since I really think the existing RHEL support model is becoming a tougher sell in today's environment.
I've recently started writing all my scripts in Python 3 as I need to future-proof some things. And it just works now. No problems. So if you tried Py3 years ago and had issues, try again today.
It's for system use. You should never depend on the vendor-supplied compilers for anything, because their goals are different from your (the developer/application-wrangler's) goals. Always develop in a defined, controlled virtual environment, period. If you want to be stylish, there's a fancy new thing called a container in which you execute your application in a defined, controlled environment. :-)
This is really by far the most important point in the thread. I understand the points about embedded use cases and all the other stuff that people bring up when the flames come out in 2 vs. 3. But this is specifically about RHEL.
If you're using RHEL, the point here is almost entirely moot. You should never have been depending on system python for anything important anyway. If RHEL replacing Python 2 with Python 3 breaks any part of your code or your company, you have already made a huge shit sandwich for yourself.
The fix, however, isn't that bad. Just start using virtual environments. It will take you a while to unravel hidden dependencies, but it's doable in a reasonable amount of time.
The Internet may switch to 3 but industry will stay on 2.7 for the next decade and no EoL will change that. There is absolutely nothing that can change that because there is zero benefit to rewriting a decade's worth of code and no manager will authorise that.
So don't bin those 2.7 books just yet if you want a job at a big co.
To some degree, you may be correct, that there will be companies that refuse to upgrade for many years. By and large, I think most people will start to switch:
* Small orgs will begin to see costs of maintaining legacy code skyrocket as it becomes harder and harder to get 2.7 interpreter support for newer kernels. Those that aren't already transitioning now will eventually bite the bullet.
* Medium orgs will probably be the laggards. They have enough funds to pay someone else to make compatible interpreters for them. Your observation about manager authorization very likely applies here so many probably won't bother to upgrade without an internal skunkworks-style initiative.
* Large orgs will upgrade. Their infosec departments will freak out that an old, "potentially insecure" language is being used, regardless of third party vendor support. I see this a fair bit now in the PHP space; where RHEL supports and backports patches for old, insecure versions of PHP, but the infosec people still can't stand it. These days, infosec is getting more and more pull in every huge organization, so it wouldn't surprise me at all to see them start to treat 2.7—or the old, un-updated packages that are locking someone to 2.7—as a possible attack vector and force a change.
All that said, you are right about jobs. If someone knows 2.7 inside and out, they will start to see higher and higher paying contract gigs over the next 15-20 years. Just like the COBOL programmers saw.
...and then there's Mega Large orgs, like Google, who are used to maintaining their own software.
I am super curious what Google will do. The thing to watch is whether Chrome/Chromium (and therefore Node.js) can ever be built without using Python 2.7.
Mega large orgs have already moved, in some cases. They have the advantage of being able to throw significant resources into infrastructure to make switching easy.
Part of it is that Python 3 does not offer any really strong reasons to switch an existing code base to it. It is definitely better, and it rounds out a bunch of small problems and inconsistencies that exist in Python 2. It also has some cool new features. However, since Python 3 is not compatible with Python 2 (though there is very good tooling for automatic conversion and for supporting both in one code base), many developers have not taken the time to port their code.
Python 3 broke backwards compatibility with Python 2. Most of the usefulness of Python comes from the fact that you can import modules. Since all of the existing modules had been written in Python 2 and would not work with Python 3, most people ignored the release of Python 3 and continued using Python 2.
I started using Python in early 2016; since then I have looked at hundreds of packages on GitHub and PyPI and I can't remember one of them being Python 2-only.
The "2 to 3 dilemma" doesn't exist in my life, even though I use Python on a daily basis. I guess it's because I'm late to the game and I'm not working in a big corp with lots of legacy code. Or maybe it's the field? I use Python for web stuff and tooling.
Anyway, in my life Python 2 is history. The only time I hear of it is either in threads like this one or in code examples on old blog posts.
> can someone explain why Python 3 has never replaced Python 2?
I think this is inaccurate. Python3 may not have replaced Python2 for everyone, but it looks like it has replaced Python2 for most projects on github.
Because Python 3 has existed for 10 years now, and a lot of new Python 3 features were backported to 2.7. In fact, all new features in 2.7 were copied from Python 3. People essentially said, "why should I migrate to Python 3 when it doesn't offer anything new and I don't care about unicode at all (in fact I like having to explicitly state what's unicode and what's not)".
Only after the PSF declared in 2015 that 2.7 was maintenance-only and wouldn't get any new features did Python 3 pick up.
Many people say that the reason for this is the breaking changes, but the truth is people are lazy and they won't do the extra work to convert until they get a kick in the butt. Other languages did have breaking changes and people moved on, because the old version was deprecated.
With Python not declaring an EOL for 2.x for a long time, people assumed 3.x would never happen and started resisting it. The FUD started and people were afraid of even trying it. It got to the point that someone was planning to release a Python 2.8 that would backport all changes from Python 3 except unicode.
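(To make that "explicitly state what's unicode" parenthetical concrete, the explicitness being defended looks like this in 2.7 - a tiny made-up illustration:

    # -*- coding: utf-8 -*-
    # Python 2.7: the programmer marks which literals are text vs. bytes.
    name = u"café"                  # unicode object, marked with the u prefix
    data = name.encode("utf-8")     # str (bytes) object, the default type in 2.x
    assert data == b"caf\xc3\xa9"
    # Python 3 flips the default: "café" is already text, b"..." is bytes,
    # and the implicit conversions between the two are gone.

Python 3 didn't remove the distinction, it just reversed which one you get without asking.)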
For us the primary reason is because not only is Python 2 the default in RHEL 6 and RHEL 7 (and thus downstreams like CentOS and Scientific Linux), they don't even ship with Python 3 or have it in the base yum repos, so getting it installed on our thousands of diverse systems is nontrivial.
Furthermore we have a large base of scripts, owned by many people, that use Python as they would use bash, so changing the default Python on them will be like linking /bin/bash and /bin/sh to /bin/csh.
This is good, but it's sad that #!/usr/bin/env python will no longer work as the go-to shebang, since python won't exist; only python3 will exist by default.
Geofft[1] has a nifty solution, but it's not as cross-platform as the old tried-and-true #!/usr/bin/env python:
I'm a Rubyist. Why are Python users so doubled down on keeping on with Python 2? I can't imagine using Ruby 1.8.x at this point, and even older versions seem well outside the realm of consideration.
There are a few major convention changes, most notably that print is now a function and must be called differently to work. People are using libraries that aren't being updated because some people don't want to go update their old work, especially scientists who made the libraries as part of a paper years ago and haven't touched them since.
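The print change is tiny in isolation, and there's even a forward-compatible spelling that runs unchanged on both versions:

    from __future__ import print_function  # no-op on 3.x; enables the function form on 2.6+

    print("temperature:", 21.5)  # works identically on 2 and 3 with the import
    print("no newline", end="")  # end= only exists in the function form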
Same with virtualenv - but for users of scripts who aren't running pyenv or virtualenv, the shebang is broken if it points to #!/usr/bin/env python. Many users of scripts aren't Python developers, so they won't be using pyenv or virtualenv. I just wish we had a cross-platform way to run Python scripts reliably like we did when /usr/bin/python always existed.
FWIW I literally just got my boss on board with switching last week. There are still a lot of stragglers who are stuck thinking that they're going to have library compatibility issues with Python 3.
> I'm a Rubyist. Why are Python users so doubled down on keeping on with Python 2?
There's a lot of good reasons Python 2 was stickier than Ruby 1.8.x, the biggest one of which is that Py 2 had had much broader use over a longer time than Ruby 1.8, which meant there were lots of people depending on lots of basically “done” code with non-active maintenance in lots of little but overlapping niches, while Ruby was mostly Rails and a few other big actively-maintained ecosystems that amounted to essentially the whole community.
And the move off Ruby 1.8 still took years, was often bitterly divisive, and came with plenty of talk of the need to fork 1.8 and continue maintenance because large chunks of the community would never move (which, as with Python, continued even after most of the community had already moved). It's just that all of that was a few years back (a lifetime in tech industry time), so most people forget that it happened.