I don't understand why people keep talking about multiple dispatch like Julia invented it. You can do that in many other languages, even languages designed for numeric computation. What's cool about Julia is that it has brought 90s compiler technology to scientific computing: a field which still thinks MATLAB is a really good way to develop and communicate scientific ideas.
Multiple dispatch turns out to be truly awesome for mathematical code. In my opinion, the careful selection of features for technical computing, while remaining a general-purpose programming language, is what differentiates Julia from other dynamic languages.
Julia is the first language to really show that multiple dispatch can be efficient in performance-critical code, though I'm not really sure why it took this long: JIT concepts were certainly familiar to implementors of Common Lisp and Dylan.
Julia is a fairly elegant language, and multiple dispatch makes a lot of sense in all sorts of places.
I don't really get why Julia is pigeon-holed for numerics. I prefer its syntax to Python's and would consider it for the scripting applications where I use Python now.
Julia is such a wonderful language. There are many design decisions that I like, but most importantly to me, its ingenious idea of combining multiple dispatch with JIT compilation still leaves me in awe. It is such an elegant solution to achieving efficient multiple dispatch.
Thanks to everyone who is working on this language!
But the performance relies on aggressive specialization, which depends on multiple dispatch. And the suitability for numerical computing, the cleanness, the beauty: that is all about multiple dispatch.
To stay with the bike analogy, multiple dispatch is definitely the wheels, not the motor.
> for one, I 100% agree with your views on variable naming elsewhere on this thread ;)
Waaah! (pulls hair.) And yet, so far apart on the function naming ;)
But seriously, though. Without the incredible polymorphism and genericity, there's really nothing at all left of Julia. Multiple dispatch isn't a feature bolted onto Julia. It is the core philosophy, and the central organizing principle.
But then again, Julia is a relatively new language that aims for high performance in scientific computing while using type inference. It helps that multiple dispatch comes with tooling that tells you how many definitions there are for a function.
I think people often underestimate (or just plain don't know about) the degree to which a multiple-dispatch-based programming language like Julia effectively implies its whole own dispatch-oriented programming paradigm, with both some amazing advantages (composability [1], and an IMO excellent balance of speed and interactivity when combined with JAOT compilation), but also some entirely new pitfalls to watch out for (particularly, type-instability [2,3]). Meanwhile, some habits and code patterns that may be seen as "best practices" in Python or Matlab can be detrimental and lead to excess allocations in Julia [4], so it may almost be easier to switch to Julia (and get good performance from day 1) if you are coming from a language like C, where you are used to thinking about allocations, in-place methods, and loops being fast.
Things are definitely stabilizing a bit post-1.0, but it's still a young language, so it'll take a while for documentation to fully catch up; in the meanwhile, the best option in my experience has been to lurk the various chat forums (slack/zulip/etc. [5]) and pick up best-practices from the folks on the cutting edge by osmosis.
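For anyone wondering what type-instability actually looks like, here's a minimal made-up sketch (function names invented for illustration): one function whose return type depends on a runtime value, and one the compiler can fully infer.

```julia
# Hypothetical illustration of type-(in)stability.

# Type-unstable: returns an Int on one branch and a Float64 on the other,
# so the compiler cannot infer a single concrete return type.
unstable(x::Int) = x > 0 ? x : 0.0

# Type-stable: both branches return Float64, so inference succeeds and the
# compiler can emit fully specialized code.
stable(x::Int) = x > 0 ? float(x) : 0.0

# @code_warntype unstable(1) highlights the Union{Float64, Int64} return
# type; @code_warntype stable(1) stays clean.
```

In real code the unstable version forces boxing and dynamic dispatch at every use site, which is exactly the kind of hidden cost the chat forums will help you spot.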
The promise of multiple dispatch is that it allows every library to be generic. One author writes a library that calculates $COMPLICATEDTHING; another author writes a library that implements a new type of number, such as ranges; provided the second library defines all the methods that the first uses, then, magically, they work together and you can calculate $COMPLICATEDTHING on ranges! That is the thing that Julia does that nothing else does.
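A tiny hedged sketch of that composability (the function name is invented): "library one" writes a routine against generic arithmetic and iteration only, and it automatically works on ranges, which could just as well have come from an independent "library two."

```julia
# Hypothetical "library one": written against generic arithmetic and
# iteration -- nothing range-specific in here.
meansquare(xs) = sum(x^2 for x in xs) / length(xs)

# Ranges (standing in for an independently authored type) happen to support
# exactly those operations, so the two compose with zero glue code:
# meansquare(1:3) and meansquare([1.0, 2.0, 3.0]) give the same answer.
```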
People write large-scale systems in dynamically-typed languages all the time. Multiple dispatch and macros make clean scaling easier than it would be in most other dynamic languages. Its competitors in numerical performance are C/C++ and Fortran, which are both minefields (C much more so). Julia is definitely safer in practice than these kinds of languages with weak, unsafe type systems.
I'm not saying static types don't have benefits as well, but requiring them would also run counter to Julia's design goals as a Matlab/R competitor.
Inheritance would also directly clash and overlap with multiple dispatch, which is strictly more powerful.
Multiple dispatch, as in Julia, Mathematica, Common Lisp, etc., does get you much closer to this vision than single dispatch. But only to an extent, and then you're on your own again.
Maybe consider compilation as a collaboration between a computer compiler and a wetware compiler. You have knowledge and do analysis which you are unable to share with the computer compiler. For a silly tiny illustration, you might know some size variable has to be an integer power of 2, but most languages' type systems don't let you tell the compiler that. Or think of writing a datatype in old C: a struct and a bunch of functions. You have a mental model of how they should all hang together, and then you mentally compile that down and hand-emit C code and tests. Or you could hand-emit assembly code and tests instead, but the C compiler is the more helpful collaborator: you can talk about more (with some sacrifices), and more easily. Now, given say two structs and a bunch of associated functions, multiple dispatch allows weaving them together without the pains that, in single dispatch, so discourage weaving. Which is better, but still: you've a couple of structs, a bunch of functions, and a still limited ability to express your intent so the computer compiler can help you with it.
So you can have multiple dispatch of + and -, and more easily add new numeric types, but any relationship between + and - is still all in your mind: the compiler has no idea that you think them connected.
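A tiny made-up illustration of that limit: Julia happily accepts a + for a new type with no - at all, and nothing would check that the two are inverses even if you defined both.

```julia
# Hypothetical type, purely for illustration.
struct Mod7
    x::Int
end
Base.:+(a::Mod7, b::Mod7) = Mod7(mod(a.x + b.x, 7))

# Mod7(5) + Mod7(4) gives Mod7(2), but Mod7(5) - Mod7(4) throws a
# MethodError: the connection between + and - lives only in our heads.
```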
I think the difference is that in Julia multiple dispatch is the main paradigm to structure code (together with very aggressive devirtualization/specialization/compilation). That enables quite amazing things. Other languages have multiple dispatch as well, but it is not foundational to the ecosystem in them. They lack the "magic" but they also have lower propensity for the interface mismatch bugs described through these two threads.
That is not a very charitable characterization of the position of multiple dispatch in Julia. It's not something that's optional: multiple dispatch is essential for the performance that Julia is looking to achieve. If you notice where acceleration DSLs tend to have trouble, you'll notice that it's always at the point where you get beyond built-in float primitives and onto object support. For example, Numba's object mode has the caveat that "code compiled in object mode will often run no faster than Python interpreted code, unless the Numba compiler can take advantage of loop-jitting", where loop jitting is simply the ability to prove that some loop is compatible with moving to the nopython mode.
The reason why Julia is fast is because automatic function specialization to concrete dispatches gives type-grounded functions which allows the complete optimization to occur on high-level looking code (see https://arxiv.org/abs/2109.01950 for type-theoretic proofs). It's basically a combination of (1) define a type system in a way that allows for type-grounded functions and compile-time shape inference (shape as in, byte structure of the structs), (2) define a multiple dispatch system with automatic function specialization on concrete types, (3) have a typed IR which proves and devirtualizes all dispatches before hitting the LLVM JIT. If you simply slap the LLVM JIT on random code, you will not get that performance. But now because multiple dispatch is fundamental to performance in the language, the rest of the "game" for the language is how to design an ergonomic language around this feature and how to teach people to use it effectively as a problem solving tool.
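A small sketch of what that specialization means in practice (Base.return_types is a reflection tool, used here only for illustration): one generic definition, with a concrete return type inferred per concrete argument type.

```julia
# One generic definition...
double(x) = x + x

# ...automatically specialized per concrete argument type. Inference proves
# a concrete return type for each specialization, which is what lets every
# dispatch be devirtualized before the code ever reaches the LLVM JIT.
# Base.return_types(double, (Int,))     -> [Int]
# Base.return_types(double, (Float64,)) -> [Float64]
```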
You actually see something similar going on in the world of Jax. With Jax, you need to be able to perform abstract interpretation to the Jax IR. In order for this to be possible with the interpreters Jax has, the functions that are being interpreted need to always have the same computational graph for the same inputs, i.e. they need to be pure functions. This is why Jax is built on functional programming paradigms. It would be similarly uncharitable to say the reason why Jax does not embrace OO is because the developers just love functional programming: the programming paradigm choice clearly falls out of what the tool needs to do.
It remains to be seen whether Jax is the tool that makes more people finally embrace functional programming styles, or whether enough people find pervasive performance compelling enough to change to the multiple dispatch paradigm of Julia. But what is clear is that the tools moving away from OOP are not doing so arbitrarily; it's all about whether doing so is beneficial enough to justify the change.
I've been using Julia for my thesis research lately, and I find multiple dispatch to be a great match for writing mathematical algorithms, especially because of the ad-hoc specialization it allows based on all parameter types. OO approaches always seemed weird for this sort of thing, where you'd have to decide which type "owns" the knowledge of how to work with the other types.
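As a hedged, made-up example of that (types and names invented): an intersection test between two unrelated set types, specialized on both arguments, with no type having to "own" the operation.

```julia
# Hypothetical types, purely for illustration.
struct Interval
    lo::Float64
    hi::Float64
end
struct PointSet
    pts::Vector{Float64}
end

# Each method is chosen by the types of *both* arguments:
intersects(a::Interval, b::Interval) = a.lo <= b.hi && b.lo <= a.hi
intersects(a::Interval, b::PointSet) = any(a.lo <= p <= a.hi for p in b.pts)
intersects(a::PointSet, b::Interval) = intersects(b, a)  # symmetry, one line
```

In a single-dispatch OO language you'd have to pick one class to host this and double-dispatch your way around the other argument; here it's just three methods on one generic function.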
What I wish for:
- faster lambdas, and lambdas which capture numbers as values, not as addresses. I left Python over the latter; it tripped me up all the time. At least the Julia developers don't consider it a feature and plan to fix it.
- to be able to specify function domain and range types, for documentation and sanity checking
- maybe some improvements in comprehensions would be nice, so I could have, e.g., an Array of Arrays of something. Currently the [x for ...] comprehensions tend to collapse things together, while the {x for ...} comprehensions tend to lose the types of what's inside.
Overall a great language already and a positive experience.
The key to making multiple dispatch work well is that you shouldn't have to think about which method gets called. For this to work out, you need to make sure that you only add a method to a function if it does the "same thing" (so don't use >> for printing, for example). To see the benefit of this in action, consider that in Julia 1im+2//3 (the syntax for sqrt(-1) plus two thirds) works and gives you a complex rational number (2//3 + 1//1*im). To get this behavior in most other languages, you would have to write special code for complex numbers with rational coefficients, but in Julia this just works, since complex and rational numbers can be constructed from anything that has arithmetic defined. This goes all the way up the stack in Julia. You can put these numbers in a matrix and matrix multiplication will just work, you can plot functions using these numbers, you can do GPU computation with them, etc. All of this works (and is fast) because multiple dispatch can pick the right method based on all the argument types.
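That example is runnable as-is, and the composition really does continue up the stack:

```julia
z = 1im + 2//3        # promotes to Complex{Rational{Int}}
# z == 2//3 + 1//1*im, exactly -- no floating point anywhere.

# Put these numbers in a matrix, and the generic matrix multiply in Base
# just works, still in exact rational-complex arithmetic:
M = [z 0; 0 z]
# M * M == [z^2 0; 0 z^2]
```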
Indeed, Julia's abstract type system with multiple dispatch is its killer feature. It enables generic programming in a beautiful and concise fashion. Here [1] Stefan Karpinski gives some explanations of why multiple dispatch is so effective.
One other really good example is BLAS. Since it is a C/Fortran library, you have 26 different functions for matrix multiply depending on the type of matrix (and at least that many for matrix-vector multiply). In Julia, you just have *, which will do the right thing no matter what. In languages without multiple dispatch, any code that wants to do a matrix multiply either has to be copied and pasted 25 times for each input type, or needs 50 lines of code to branch on the input type. Multiple dispatch makes all of that pain go away. You just use * and you get the most specific method available.
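A small sketch of that with the standard library's structured matrix types (the values are just examples): the same * picks a specialized method based on the types of both operands.

```julia
using LinearAlgebra

A = [1.0 2.0; 3.0 4.0]
D = Diagonal([2.0, 3.0])
S = Symmetric(A)        # uses the upper triangle: [1 2; 2 4]

# One *, three different specialized methods, chosen by both argument types:
A * D   # dense * diagonal: scales the columns of A
D * A   # diagonal * dense: scales the rows of A
S * D   # symmetric * diagonal
```

None of the call sites mention a storage format, yet each multiply dispatches to code that exploits the structure of both arguments.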