I've been using Julia for my thesis research lately, and I find multiple dispatch to be a great match for writing mathematical algorithms, especially because of the ad-hoc specialization it allows based on all parameter types. OO approaches always seemed awkward for this sort of thing, where you'd have to decide which type "owns" the knowledge of how to work with the other types.
What I wish for:
- faster lambdas, and closures that capture numbers by value rather than by reference. I left Python partly over the latter; it tripped me up all the time. At least the Julia developers don't consider it a feature and plan to fix it.
- to be able to specify function domain and range types, for documentation and sanity checking
- maybe some improvements in comprehensions would be nice, so I could build an Array of Arrays of something, for example. Currently the [x for ...] comprehensions tend to flatten things together, while the {x for ...} comprehensions tend to lose the types of what's inside.
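For what it's worth, in current Julia versions a comprehension of arrays keeps the nesting, and a typed comprehension pins the element type. A small sketch:

```julia
# A comprehension whose body is itself an array is NOT flattened:
v = [[i, i + 1] for i in 1:3]            # Vector{Vector{Int64}}

# A typed comprehension (T[...]) forces the element type explicitly:
w = Vector{Int}[[i, i + 1] for i in 1:3] # also Vector{Vector{Int64}}
```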
Overall a great language already and a positive experience.
Julia is a fairly elegant nice language and multiple dispatch makes a lot of sense in all sorts of places.
I don't really get why Julia is pigeon-holed for numerics. I prefer its syntax to Python and would consider it for scripting applications where I use Julia now.
Julia supports multiple dispatch OO, which is more powerful than what you get from traditional OO languages. Or do you purely mean the x.foo(y) syntax rather than foo(x,y)?
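To illustrate the difference, here is a toy sketch (the shape types and method bodies are made up for the example) of dispatching on the types of both arguments, which single dispatch on an implicit first argument cannot express directly:

```julia
abstract type Shape end
struct Circle <: Shape end
struct Square <: Shape end

# The chosen method depends on BOTH argument types, not just the first:
intersect_kind(::Circle, ::Circle) = "circle-circle case"
intersect_kind(::Circle, ::Square) = "circle-square case"
intersect_kind(a::Square, b::Circle) = intersect_kind(b, a)  # reuse by symmetry
intersect_kind(::Square, ::Square) = "square-square case"

intersect_kind(Circle(), Square())
```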
Multiple dispatch turns out to be truly awesome for mathematical code. In my opinion, what differentiates Julia from other dynamic languages is its focus and careful selection of features for technical computing while remaining a general-purpose programming language.
Those who are attached to the class-based OOP model used, for example, in Python will find the dot notation more natural. But Julia's multiple dispatch is a superset of this, and unarguably more powerful and flexible. Python OOP leads to monstrosities whenever behavior needs to depend on more than one argument's type.
The main issue I encountered as a Julia user is that multiple dispatch doesn't scale very well.
When you start building out a project, it's easy to keep track of things and debug when multiple dispatch starts failing (i.e. the `Any` type starts spreading everywhere and Julia slows to Python-like speeds).
In medium-to-large projects, it becomes extremely cumbersome to manage this. It's doable, but it adds a layer of complexity management that simply doesn't exist in statically typed or pure scripting languages.
Of course, you can just decide to explicitly type everything - but the issue here again is the lack of enforcement.
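As a minimal sketch of how `Any` creeps in (the struct and function names here are made up): a single untyped field is enough to make every downstream operation dispatch dynamically.

```julia
struct Box
    x            # untyped field, so b.x is inferred as Any
end

struct TypedBox{T}
    x::T         # concrete whenever T is concrete
end

# Same generic code; performance depends entirely on what it's fed:
sumboxes(bs) = sum(b.x for b in bs)

# sumboxes over Box values dispatches dynamically on every element;
# over TypedBox{Int} values it compiles to a tight integer loop.
sumboxes(Box.(1:1000))
sumboxes(TypedBox.(1:1000))
```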
In a nutshell: Julia is great when you're a grad student working mostly by yourself on small scale projects! But not so great in prod.
And there's really no problem with that; that's who the language was designed for!
That is not a very charitable characterization of the position of multiple dispatch in Julia. It's not something that's optional: multiple dispatch is essential for the performance that Julia is looking to achieve. If you look at where acceleration DSLs tend to have trouble, you'll notice that it's always at the point where you get beyond built-in float primitives and onto object support. For example, Numba's object mode has the caveat that "code compiled in object mode will often run no faster than Python interpreted code, unless the Numba compiler can take advantage of loop-jitting", where loop-jitting is simply the ability to prove that some loop is compatible with moving to the nopython mode.
The reason why Julia is fast is because automatic function specialization to concrete dispatches gives type-grounded functions which allows the complete optimization to occur on high-level looking code (see https://arxiv.org/abs/2109.01950 for type-theoretic proofs). It's basically a combination of (1) define a type system in a way that allows for type-grounded functions and compile-time shape inference (shape as in, byte structure of the structs), (2) define a multiple dispatch system with automatic function specialization on concrete types, (3) have a typed IR which proves and devirtualizes all dispatches before hitting the LLVM JIT. If you simply slap the LLVM JIT on random code, you will not get that performance. But now because multiple dispatch is fundamental to performance in the language, the rest of the "game" for the language is how to design an ergonomic language around this feature and how to teach people to use it effectively as a problem solving tool.
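As a small illustrative sketch (the function is made up for the example): each concrete argument-type combination gets its own specialized, fully type-grounded compilation, which you can inspect directly.

```julia
f(x, y) = x * y + one(x)

# Calling with concrete types triggers specialization per type tuple:
f(2, 3)        # compiles and runs a method instance for (Int, Int)
f(2.0, 3.0)    # compiles a separate instance for (Float64, Float64)

# code_typed shows the inferred, devirtualized IR that reaches LLVM:
code_typed(f, (Int, Int))
```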
You actually see something similar going on in the world of Jax. With Jax, you need to be able to perform abstract interpretation to the Jax IR. In order for this to be possible with the interpreters Jax has, the functions being interpreted need to always have the same computational graph for the same inputs, i.e. they need to be pure functions. This is why Jax is built on functional programming paradigms. It would be similarly uncharitable to say that the reason Jax does not embrace OO is that the developers just love functional programming: the programming paradigm choice clearly falls out of what the tool needs to do.
It remains to be seen whether Jax is the tool that makes more people finally embrace functional programming styles, or whether enough people find pervasive performance important enough to switch to Julia's multiple dispatch paradigm. But what is clear is that the tools moving away from OOP are not doing so arbitrarily; it's all about whether doing so is beneficial enough to justify the change.
Indeed, Julia's abstract type system with multiple dispatch is its killer feature. It enables generic programming in a beautiful and concise fashion. Here [1] Stefan Karpinski gives some explanations of why multiple dispatch is so effective.
Julia has a very nice type system, the nicest of any dynamically typed language I am familiar with. This has something to do with multiple dispatch, but more to do with designing a type system that lets the JIT unbox everything that has to be unboxed for high performance, without sacrificing the freedom of dynamic typing.
IIUC, Common Lisp is the giant on whose shoulders Julia built in this respect.
The promise of multiple dispatch is that it allows every library to be generic. One author writes a library that calculates $COMPLICATEDTHING; another author writes a library that implements a new type of number, such as ranges; provided the second library defines all the methods that the first uses, then, magically, they work together and you can calculate $COMPLICATEDTHING on ranges! That is the thing that Julia does that nothing else does.
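A toy sketch of that composability (the names and the dual-number type are made up for the example): a generic "library" function written with no knowledge of a user's number type just works once the user type implements the operations it needs.

```julia
# Library A: a generic function, written against any container of numbers.
l2norm(v) = sqrt(sum(x -> x * x, v))

# Library B: a toy dual number for forward-mode differentiation.
struct Dual <: Number
    val::Float64
    der::Float64
end
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.val * b.der + a.der * b.val)
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.sqrt(a::Dual)        = Dual(sqrt(a.val), a.der / (2sqrt(a.val)))

# l2norm has never heard of Dual, yet the composition "just works":
l2norm([Dual(3.0, 1.0), Dual(4.0, 0.0)])  # value 5.0, derivative 0.6
```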
I don't understand why people keep talking about multiple dispatch like Julia invented it. You can do that in many other languages, even languages designed for numeric computation. What's cool about Julia is that it has brought 90s compiler technology to scientific computing: a field which still thinks MATLAB is a really good way to develop and communicate scientific ideas.
Ever since I discovered Julia a few years ago, I’ve found I can’t stand using classes any more (particularly in Python). Multiple dispatch just feels so much more elegant than trying to dispatch everything on the type of an implicit first argument.
I give examples of expressivity in my comment above. Multiple dispatch is completely ubiquitous in Julia, being the default. This allows for the definition of flexible and extensible interfaces similar to Haskell's typeclasses. Macros are used to build things like PyCall and to generate fast and flexible CUDA code.
This is not easily possible in Python, especially not with high performance.
But then again, Julia is a relatively new language that aims for high performance in scientific computing while using type inference. It also helps that multiple dispatch lets you see how many definitions there are for a function (e.g. via `methods`).
The issue is not that the bugs are in the correctness of multiple dispatch itself, but that multiple dispatch lets you combine generic programming with abstract data types. Thus, one can have a generic implementation in base Julia, and someone can pass in a new user-defined type - a combination that can easily fail. Some of the frustration also arises from types such as OffsetArrays that are included in the base distribution but not as well supported and tested as the regular Array type. Thus, the discussion here tends to focus on defining interfaces, and of course on better testing of uncommonly used data types.
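A common instance of that failure mode, sketched below with made-up function names: generic code that silently assumes 1-based indexing works for `Array` but breaks for arrays with custom axes, which is why the interface discussion matters.

```julia
# Assumes indices start at 1 - fine for Array, wrong for offset axes:
first_three(v) = v[1:3]

# Index-agnostic version using the AbstractArray axes interface:
function first_three_safe(v)
    i = firstindex(v)
    v[i:i+2]
end

first_three_safe([10, 20, 30, 40])  # works regardless of where indices start
```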
In general, we've not had a formal roadmap - but we present a "State of Julia" talk at JuliaCon every year. Very broadly, the list (off the top of my head) includes: improving a lot of the underlying compiler infrastructure, improving support for differentiable programming, improving garbage collection, support for GPUs from multiple vendors (there are many of those now), supporting Apple silicon, and type system support for tools like JET.jl.
NEWS.md is generally updated during the course of a release cycle, which eventually becomes release notes, and then post release, we put together a highlights blog post. https://github.com/JuliaLang/julia/blob/master/NEWS.md
Julia is the first language to really show that multiple dispatch can be efficient in performance-critical code, but I'm not really sure why: JIT concepts were certainly familiar to implementors of Common Lisp and Dylan.
I used to really like Julia but over time have heavily moved to the view that my ideal general-purpose programming language would look a lot like Swift, perhaps with some bits of Julia, e.g. the REPL.
Julia doesn't have good support for OOP. I don't think FP is the optimal way to solve every programming task. Usually a mix of both approaches leads to the simplest, most maintainable and readable code.
And I'm really not sold on multiple dispatch. In Swift, I would simply implement the join function by adding an extension on `Array<String>`. And the + function would just be, for example, `static func + (left: Vector2D, right: Vector2D) -> Vector2D`. I've never come across a situation where I wished Swift supported multiple dispatch. On the other hand, the fact that Julia doesn't use `object.method()` has two very clear drawbacks: lack of completion in an IDE and poor readability (example: `object.doSomething(a, b, c).doSomethingElse(a, b, c)` - imagine how this would look in Julia).
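For reference, the chained call above would usually be written in Julia as nested calls (the function definitions here are hypothetical stand-ins, just to make the snippet runnable):

```julia
# Made-up stand-ins for the chained methods in the comment:
do_something(obj, a, b, c)      = obj + a + b + c
do_something_else(obj, a, b, c) = obj * a * b * c

# Julia's equivalent of object.doSomething(a,b,c).doSomethingElse(a,b,c):
do_something_else(do_something(1, 2, 3, 4), 2, 3, 4)
```

Whether the nesting reads worse than the dot chain is exactly the point of disagreement in this thread.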
But the performance relies on the aggressive specialization which depends on multiple dispatch. And the adaptation to numerical computing, the cleanness and beauty is all about multiple dispatch.
To stay with the bike analogy, multiple dispatch is definitely the wheels, not the motor.
> for one, I 100% agree with your views on variable naming elsewhere on this thread ;)
Waaah! (pulls hair.) And yet, so far apart on the function naming ;)
But seriously, though. Without the incredible polymorphism and genericity, there's really nothing at all left of Julia. Multiple dispatch isn't a feature bolted onto Julia. It is the core philosophy, and the central organizing principle.