
cpython is faster than c++ under the following circumstances:

1) when you're using native libraries via python that were written by better c/c++ programmers than you are and you're spending most of your time within them

2) when you're using native libraries in python that are better implementations than the c/c++ libraries you're comparing against

3) when you don't know the libraries you're using in c/c++ (what they're talking about here)

...otherwise, if you're just doing basic control flow, optimizing-compiled c/c++ will almost always be faster (unless you're using numba or pypy or something).

point stands about the constants though. yes, asymptotic analysis will tell you how an algorithm's performance scales, but if you're just looking for raw performance, especially if you have "little data", you really have to measure, as implementation details of the software and hardware start to have a greater effect. (oftentimes the growth of execution time isn't even smooth in n for small n)
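A quick way to see this in practice (a minimal sketch; the exact crossover point depends on your hardware and interpreter): for small n, an O(n) linear scan can beat an O(log n) binary search because its constant factors are lower. Measure, don't assume.

```python
import bisect
import timeit

def linear_contains(sorted_list, x):
    # O(n) scan, but almost no per-call overhead
    for item in sorted_list:
        if item == x:
            return True
        if item > x:
            return False
    return False

def bisect_contains(sorted_list, x):
    # O(log n), but pays function-call and indexing overhead
    i = bisect.bisect_left(sorted_list, x)
    return i < len(sorted_list) and sorted_list[i] == x

small = list(range(8))  # "little data"
t_lin = timeit.timeit(lambda: linear_contains(small, 5), number=100_000)
t_bis = timeit.timeit(lambda: bisect_contains(small, 5), number=100_000)
print(f"linear: {t_lin:.4f}s  bisect: {t_bis:.4f}s")
```

Which one wins at n=8 varies by machine; the point is only that big-O alone won't tell you.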




I actually have some examples that run faster in CPython. However, that's mostly idiomatic -- the code can be written to favor one interpreter or the other in terms of speed.

I don't think it's correct to say that CPython is slow. What you can more accurately say about CPython is that its performance is highly variable. Some things are very fast, while others are comparatively slow.

The slow things tend to be the sort of numerical loops that you see in micro-benchmarks. It's no coincidence that the version of Python in the linked article saw its greatest speed up in a numerical loop, but only modest improvement elsewhere. It's exactly this sort of simple repetitive operation where interpreter overhead matters the most.

Language features that encapsulate complex functionality tend to be harder to speed up in CPython because the VM operates at a fairly high level. In effect you're just kicking off a large subroutine that is written in C, and you're really executing native code until that operation is complete. You're not going to improve very much on that no matter how much you try.

What this means is that speed will depend heavily on the type of application program being written, and also on how much the programmer takes advantage of the unique language features. It also makes realistic cross language benchmarks difficult because the right way to do something in Python may not have a direct equivalent in another language. The result tends to be "lowest common denominator" benchmarks, which are exactly the sort of algorithms which CPython does worst at.
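As a small illustration of that "large C subroutine" effect (a sketch using only the standard library): a builtin like sum() runs its loop in C, while the equivalent Python for-loop pays interpreter overhead on every iteration.

```python
import timeit

data = list(range(10_000))

def manual_sum(xs):
    # interpreter dispatch on every iteration
    total = 0
    for x in xs:
        total += x
    return total

# sum() executes a single C-level loop over the same data
t_builtin = timeit.timeit(lambda: sum(data), number=1_000)
t_manual = timeit.timeit(lambda: manual_sum(data), number=1_000)
print(f"builtin sum: {t_builtin:.3f}s  manual loop: {t_manual:.3f}s")
```

The builtin is typically several times faster on CPython, even though both are "the same algorithm".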


The "Python is the language, not the implementation" answer has already been given by @dalke, so I'll answer whether CPython, the implementation of Python that Python developers commonly use, is slower than C/C++:

CPython is necessarily slower and uses more memory than C for most tasks, because CPython is itself implemented in C. You couldn't expect a new programming language implemented in Python to be faster than Python, either.

This is less of a concern for scientists and others with very tough performance requirements than it otherwise would be, in part because CPython code can delegate to C and C++ code[0]. This means the portions of your program that need to be highly optimized can be implemented in C or C++, while the rest of your program can continue to be written in Python.

[0] https://docs.python.org/3/extending/extending.html
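For a lighter-weight taste of that delegation than writing a full extension module, the standard-library ctypes module can call into an existing compiled library directly (a sketch; assumes a Unix-like system where find_library can locate the C math library).

```python
import ctypes
import ctypes.util

# locate the C math library; may return None on platforms without a separate libm
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# declare the C signature: double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))  # the computation happens in compiled C code
```

For hot loops you'd still write a proper extension (or use Cython), since each ctypes call carries its own marshalling overhead.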


Faster than CPython? ;)

Faster than CPython doesn't mean it is fast.

Is it faster than interpreted CPython? I couldn't find the benchmarks; maybe I'm blind.

> It's easy to forget that CPython is not that optimised generally,

"Not that optimised" is actually an understatement. In my experience, CPU-heavy code typically becomes 10-100x faster if you port it from Python to a language like C (without even trying to be clever or using SIMD assembly).


Programming languages aren't slower or faster. The implementations of them are.

Provided there's no FFI and no extensions written in other languages, non-I/O-bound interpreted CPython is 50-100x slower than C/C++. JIT-compiled Python (PyPy, etc.) might be in the 2-10x-slower bracket.

If extensions written in C/C++ are used, then you might be effectively comparing those extensions against another C/C++ implementation.

So if your code is going to be I/O bound and/or can effectively offload all computation to modules written in C/C++ (like NumPy), then Python might be practically as fast as C/C++.

If you use pure interpreted Python and implement a compute-limited algorithm in it, your code is going to be 50-100x slower than C/C++.

Real life scenarios are probably somewhere in between those two extreme cases.
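A minimal sketch of the offloading case described above (assumes NumPy is installed): the same dot product computed in a pure-Python loop versus a single vectorized NumPy call that runs entirely in compiled code.

```python
import timeit

import numpy as np

xs = list(range(100_000))
arr = np.arange(100_000, dtype=np.float64)

def pure_python_dot(v):
    # compute-limited loop: interpreter overhead on every element
    total = 0.0
    for x in v:
        total += x * x
    return total

# np.dot dispatches once into optimized C/BLAS code
t_py = timeit.timeit(lambda: pure_python_dot(xs), number=10)
t_np = timeit.timeit(lambda: float(np.dot(arr, arr)), number=10)
print(f"pure Python: {t_py:.3f}s  NumPy: {t_np:.3f}s")
```

The gap you observe here is a rough proxy for the 50-100x figure; it varies with array size and BLAS backend.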


Python was sped up 30x, according to the same post, so the difference between the best Python and the best C++ becomes a "mere" ~170x.

Of course, on numeric problems like this, plain CPython inevitably sucks performance-wise (no JIT, a lot of lookups, no value types).


Not the OP, but it claims to be ~2x faster than CPython. I haven’t done extensive benchmarking, but for my small projects that seems about right.

By all appearances, the alternatives are only negligibly faster than CPython?!

The post seems to be about execution speed, though. However, even there it's definitely not the #1 factor, as witnessed by the popularity of CPython...

Right, the CPython runtime is first trying to be correct (and IMHO it is correct). Very few optimizations are made by the compiler/interpreter. This has long been known by Python devs, and exploited by specifically using APIs known to be written in C/C++, e.g. using 'map' instead of a list comprehension. map was generally faster (at least in the 2.x days) than an equivalent list comprehension.
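A quick way to check the map-vs-comprehension claim on your own interpreter (a sketch; on modern 3.x the gap is often small or even reversed, so measure rather than assume):

```python
import timeit

data = list(range(10_000))

# map() drives a C-level function (abs) from C; the comprehension
# re-enters the bytecode loop for every element
t_map = timeit.timeit(lambda: list(map(abs, data)), number=1_000)
t_comp = timeit.timeit(lambda: [abs(x) for x in data], number=1_000)
print(f"map: {t_map:.3f}s  list comp: {t_comp:.3f}s")
```

Note the advantage only appears when the mapped function is itself written in C; map() with a Python lambda usually loses.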

When people say cpython is slow they are generally pointing to two things

1) The interpreter is _slow_

2) You can't achieve real thread based parallelism

I think in general people have a high-level concept of (1): static vs. dynamic and interpreted vs. compiled is an understandable trade-off. CPython as an implementation generally gets typecast as slow for its default dynamism, which has understandable negative performance implications[1]. However, CPython also gives you lots of ergonomic ways to push your program towards the static/compiled end of the spectrum with things like pandas/numpy/numba/extensions. In general, using these correctly puts you within the ballpark of _faster_ languages. Could you write faster assembly by hand? Sure! Is optimizing in another language worth your time? I don't know.

I've never really understood (2), the lack of threading, as a problem: multi-process parallelism can be accomplished fairly easily if you are CPU bound, and projects like uvloop[2] make async tasks fast enough to compete with any web framework out there. Furthermore, even though the cost of wasted CPU cycles can grow considerable when operating over hundreds or thousands of servers, doing distributed computing right is still hard, and developing something like celery/airflow/luigi/dask from scratch is not cheap either. Leaning on CPython's massive ecosystem can massively lower the barrier to entry on a lot of big problems.

I think there are plenty of examples of rewrites in Go[3]/Rust[4] that have worked great for people. I have no doubt that Python is _not_ the be-all-end-all language, but I think the "python is slow" worry is generally overplayed.

Specific problems require specific solutions. I'm glad Haskell seems to work for the author in genetic analysis, but I think this could have been a more interesting article with some specific Python-Haskell comparisons rather than the generic "python is slow" argument.

[1] http://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/

[2] https://magic.io/blog/uvloop-blazing-fast-python-networking/

[3] https://web.archive.org/web/20170101002625/http://blog.parse...

[4] https://blogs.dropbox.com/tech/2016/05/inside-the-magic-pock...
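The multi-process route mentioned in (2) can be sketched with the standard library alone: each worker is a separate interpreter with its own GIL, so CPU-bound work genuinely runs in parallel.

```python
from multiprocessing import Pool

def cpu_bound(n):
    # pure-Python arithmetic: serialized by the GIL across threads,
    # but not across processes
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # four worker processes, each a separate interpreter with its own GIL
    with Pool(processes=4) as pool:
        results = pool.map(cpu_bound, [100_000] * 4)
    print(results)
```

The trade-off versus threads is that arguments and results are pickled between processes, so this pays off when the per-task compute dominates the serialization cost.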


It's 10x slower at a lot of things, but even vanilla CPython compiles source to bytecode, much like C# does. Python is slow because of an extraordinary amount of runtime dynamism, because it doesn't have a JIT or many optimization passes, and because performance isn't a top concern of the project.

Where do you draw the line? Most of CPython is written in C, including the array module (https://docs.python.org/3/library/array.html) mentioned in that article.

Yes, pure Python is slower and takes up more memory. But that doesn't mean it can't be productive and performant using these types of strategies to speed up where necessary.


Seems to be 2x faster than CPython, but as I understand it, they are working on optimizations now.

It tends to considerably outperform CPython on performance benchmarks.

I'm loving the Faster CPython project. Just for reference, I have a project originally written in (very optimized) Python that has a Rust version. The Rust version is approximately 150% faster under Python 3.10; under Python 3.11 it is 100% faster. This is incredible, as I would prefer to keep it all in Python.
